[Stable Diffusion] Prompt Sharing and Learning Thread

Synalon

Member
Jan 31, 2022
225
663
Can you give a link to Forge?

There are two ways to install Forge: a one-click package if you aren't comfortable with git, or a regular git clone if you are.



Guide on how to install it using both methods.
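If you go the git route, the steps script easily. A minimal Python sketch (assumes git is on your PATH; the repo URL is Forge's official GitHub repository):

```python
import subprocess

# Clone the Forge repository (the "git as usual" route).
subprocess.run(
    ["git", "clone", "https://github.com/lllyasviel/stable-diffusion-webui-forge.git"],
    check=True,
)
# On Windows, run webui-user.bat inside the cloned folder afterwards;
# the first launch builds the venv and installs all dependencies.
```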


You are welcome.
 

silversalmon68

New Member
Mar 1, 2024
2
14
Thanks to Jimwalrus, I was able to get everything up and running and started experimenting late Thursday night into the wee hours like a demon. All images contain the necessary prompt data, but the idiotic thing is that I closed the browser UI, so I can't recall which base model I used, which sucks, as I wanted to refine these further. My go-forward approach is to screencap using the Snipping Tool, which is native to Windows and located under 'Windows Accessories'. I'll have to try to reverse-engineer them, as I really like the BBW gals I generated.

I even practiced some technique refinement by directly plagiarizing Jim's amazing work from page 1, to test the effectiveness of the "Restore Faces" functionality on the 'Settings' tab of the UI. Thanks to everyone on this page for the help; I'm just a rookie, but this is a lot of fun. The fourth image is essentially how I envisioned Doris Day if she discovered the noon-time Pizza Hut buffet (which is sorely missed, if you can forgive my editorializing). These are straight out of Stable Diffusion, with native upscaling applied to the final image. I subscribe to an online image upscaler named Remini.ai; it does an amazing job of restoring the image quality of my collection of 35mm colour slides for portraits, but it absolutely wrecks the detail of my vast collection of airliner and fighter aircraft slides.

 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Thanks to Jimwalrus, I was able to get everything up and running and started experimenting late Thursday night into the wee hours like a demon. All images contain the necessary prompt data, but the idiotic thing is that I closed the browser UI, so I can't recall which base model I used, which sucks, as I wanted to refine these further. My go-forward approach is to screencap using the Snipping Tool, which is native to Windows and located under 'Windows Accessories'. I'll have to try to reverse-engineer them, as I really like the BBW gals I generated.

I even practiced some technique refinement by directly plagiarizing Jim's amazing work from page 1, to test the effectiveness of the "Restore Faces" functionality on the 'Settings' tab of the UI. Thanks to everyone on this page for the help; I'm just a rookie, but this is a lot of fun. The fourth image is essentially how I envisioned Doris Day if she discovered the noon-time Pizza Hut buffet (which is sorely missed, if you can forgive my editorializing). These are straight out of Stable Diffusion, with native upscaling applied to the final image. I subscribe to an online image upscaler named Remini.ai; it does an amazing job of restoring the image quality of my collection of 35mm colour slides for portraits, but it absolutely wrecks the detail of my vast collection of airliner and fighter aircraft slides.

To see your settings and which ckpt model you used, simply load the image in the PNG Info tab; everything is right there, including the ckpt model. You can send it to txt2img or img2img from PNG Info and it will set everything for you. You only need to switch the ckpt manually. If you used postprocessing, you also need to re-enable it manually, for some reason.
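If you'd rather check it outside the UI, the same data is readable with a few lines of Python. A1111-style UIs store the generation settings in a PNG text chunk named "parameters", which Pillow exposes via Image.info (minimal sketch; the filename is just an example):

```python
from PIL import Image

# The prompt, seed, sampler, and model hash all live in the
# "parameters" text chunk that the WebUI writes into every PNG.
img = Image.open("00070-4039673012.png")
print(img.info.get("parameters", "no generation data found"))
```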
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Thanks, Mr. Fox. I was able to locate the model and play around some more. Currently really liking the revAnimated_v122EOL model. Just copied a few nice ones from Civitai; these are the direct output from Stable Diffusion. They'll look boss when run through Remini.ai.

There are several ways of upscaling in Stable Diffusion that will most likely give a much better result than any external upscaler; external tools won't even come close: hires fix in txt2img, and SD Upscale in the img2img tab. I have made posts about them in this thread; Seph might have linked to those posts on the first page.

*I see that you are using hires fix. Go and get better upscalers: 4x-UltraSharp and NMKD-Siax.




Place them in Stable-Diffusion\Stable-Diffusion-WebUI-Forge\models\ESRGAN.
If you don't have an ESRGAN folder, simply create one.
Go to Settings > Upscaling, make sure "R-ESRGAN 4x+" is selected, then press "Apply settings". You might also need to reload the UI.
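If you want to script the file placement, here's a rough Python sketch (the install path and the .pth filenames are assumptions; match them to wherever your downloads actually landed):

```python
from pathlib import Path
import shutil

# Assumed Forge install root; adjust to your own setup.
root = Path(r"Stable-Diffusion/Stable-Diffusion-WebUI-Forge")
esrgan = root / "models" / "ESRGAN"
esrgan.mkdir(parents=True, exist_ok=True)  # create the folder if it's missing

# Move the downloaded upscaler weights into place (filenames may differ).
for name in ["4x-UltraSharp.pth", "4x_NMKD-Siax_200k.pth"]:
    src = Path.home() / "Downloads" / name
    if src.exists():
        shutil.move(str(src), esrgan / name)
```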
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
A tip: don't have hires fix enabled while generating and hunting for a good image (seed). When you find an image you like, press the little star button under the image without doing anything else. It will then use the hires fix settings and jump straight to the hires steps instead of regenerating the base image, which means you retain the original image better.
[screenshot: the hires fix generation button]
Then send it to img2img, go to the script menu, and select SD Upscale.
Now you will upscale the image one more time. Use a very low denoise setting for a sharp image. You need to experiment a little to find the right denoise setting for you and your specific image, as it can vary.
Try 0.1-0.2 as a start. The higher the denoise, the more the image will change, and vice versa, so you need to balance this.
The main resolution is now the tile size, and the multiplier in SD Upscale decides the final resolution based on the image you have in img2img.
This means that you can reduce the main resolution (tile size) and it will go quicker; just make sure the aspect ratio is correct. Make sure not to use Restore Faces or postprocessing while upscaling in img2img; it can create mask-blend issues, which show up as artifacts or pixelation around the head and hair.
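For anyone driving this from a script instead of the UI: if you launch the WebUI with the --api flag, the same hires fix settings can be sent over HTTP. A hedged sketch (the upscaler name must match one you actually installed, and the prompt is just a placeholder):

```python
import base64
import requests

# Hires fix via the local WebUI API (started with --api).
payload = {
    "prompt": "portrait photo, detailed skin",
    "steps": 25,
    "enable_hr": True,            # hires fix on
    "hr_scale": 2,                # 2x upscale
    "hr_upscaler": "4x-UltraSharp",
    "denoising_strength": 0.2,    # low denoise keeps the composition intact
}
r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=600)
r.raise_for_status()
with open("hires.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```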
 

rogue_69

Newbie
Nov 9, 2021
87
298
I'm just starting to learn to use ComfyUI. My main use will be animation. What are some good resources I could use to learn (or things to download)? I've loaded some workflows, but they always have nodes that I don't have, and there are usually no instructions on where to put the files.
 

theMickey_

Engaged Member
Mar 19, 2020
2,198
2,835
I've loaded some workflows, but they always have nodes that I don't have, and there are usually no instructions on where to put the files.
The first thing you should install with ComfyUI is the ComfyUI-Manager. It's used to install custom nodes without you having to download them manually from GitHub and add them to your ComfyUI installation yourself.

Also: if an imported/downloaded workflow has red nodes, which means your installation is missing those nodes, you can just use the ComfyUI-Manager to "Install Missing Custom Nodes".
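Installing the Manager itself is the one clone you still do by hand; it goes into custom_nodes like any other node pack. A small Python sketch (assumes a default ComfyUI checkout in the current directory):

```python
import subprocess
from pathlib import Path

# Clone ComfyUI-Manager into the custom_nodes folder of a ComfyUI checkout.
custom_nodes = Path("ComfyUI") / "custom_nodes"
subprocess.run(
    ["git", "clone", "https://github.com/ltdrdata/ComfyUI-Manager.git"],
    cwd=custom_nodes,
    check=True,
)
# Restart ComfyUI afterwards so the Manager button appears in the menu.
```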
 

Synalon

Member
Jan 31, 2022
225
663
I'm just starting to learn to use ComfyUI. My main use will be animation. What are some good resources I could use to learn (or things to download)? I've loaded some workflows, but they always have nodes that I don't have, and there are usually no instructions on where to put the files.
YouTube is a great resource for Comfy tutorials.
Look for Sebastian Kamph and/or Olivio Sarikas to get started; there are many more.

If you browse through the video lists of the two I mentioned, they show some nodes for animation, and how to download and set them up.
 

Nano999

Member
Jun 4, 2022
168
73
I deleted the venv folder to update SD because my generation was slow, but when I launched it I saw this:
Do you know what's the matter?

Is it because of Python?
 

me3

Member
Dec 31, 2016
316
708
I deleted the venv folder to update SD because my generation was slow, but when I launched it I saw this:
Do you know what's the matter?

Is it because of Python?
It doesn't find a matching version of torch to install: it's specifically looking for version 2.1.2 that fits your system, and it's not going to find one.
As far as I can tell, PyTorch 2.1.2 only supports up to Python 3.11, and you're seemingly using 3.12.
You can either manually create the venv from the 3.10 version you seem to have installed, or use a later torch version that supports 3.12.
Versions 2.2.0 and 2.2.1 should support it. You're going to need a later version of torchvision as well, to match torch.
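A quick way to confirm the mismatch before rebuilding the venv is to print the versions from the environment the WebUI actually uses (minimal sketch; run it with the venv's own python):

```python
import sys
import torch
import torchvision

# torch 2.1.2 ships wheels only up to Python 3.11; Python 3.12 needs torch >= 2.2.0.
print("Python:", sys.version)
print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
```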
 

Nano999

Member
Jun 4, 2022
168
73
No idea why I needed 3.12, so I just reverted to 3.10, and now it's rebuilding the venv folder.
I used this method because I don't know how to update torch/torchvision :D
Hope this fixes my slow-generation issue, which has come up again.
 

Nano999

Member
Jun 4, 2022
168
73
Oh well, the generation speed is a turtle; something is wrong with the optimization.
Deleting venv did not help.
Any idea how to fix it?

Everything was working just fine about 3 weeks ago (30-60 sec at most to generate one image), and now it's slow as hell.

 

Nano999

Member
Jun 4, 2022
168
73
I kind of found the core reason why it's so slow now.

When I add any LoRA, the speed becomes a turtle, but if I remove the LoRA the speed is super fast (30-60 sec), as it should be and as it used to be with any LoRA (no matter how many LoRAs were loaded).

Ha? Why :D

Also, some checkpoints are super slow now, even with no LoRA added -_-

15 minutes for 1 image is crazy.

I don't know, maybe delete all the LoRAs and add them back one by one or something?
 

me3

Member
Dec 31, 2016
316
708
I kind of found the core reason why it's so slow now.

When I add any LoRA, the speed becomes a turtle, but if I remove the LoRA the speed is super fast (30-60 sec), as it should be and as it used to be with any LoRA (no matter how many LoRAs were loaded).

Ha? Why :D

Also, some checkpoints are super slow now, even with no LoRA added -_-

15 minutes for 1 image is crazy.

I don't know, maybe delete all the LoRAs and add them back one by one or something?
LoRAs can add A LOT, depending on what type they are and how many you use. Which sampler you use affects the time as well.
50 steps is generally too many, though, unless you're doing it for very specific reasons/conditions; you shouldn't need that many, so you can save yourself some time there.
Also, why are you running with the skip-CUDA-test option? If there's some kind of issue with torch and CUDA, it could be why things are slower.
If this started after upgrading to 1.8, there could obviously be an issue with the update.
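A quick way to test the CUDA point (run inside the WebUI's venv): if torch can't see the GPU, generation silently falls back to the CPU, which easily turns 30-60 seconds per image into 15 minutes.

```python
import torch

# If this prints False, torch is running on the CPU,
# and the skip-CUDA-test flag is hiding the real problem.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    print("CUDA runtime:", torch.version.cuda)
```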
 

LongJohn77

Newbie
May 16, 2017
37
35
I was able to play decently with Stable Diffusion on an AMD RX 5700 XT (about 2-3.2 it/s at 512x512), but on Linux (Ubuntu 22.04 with torch 1.13.1+rocm5.2 for compatibility). The only problem is that with every image generated, the PC's memory fills up more and more until the desktop stops responding, forcing a restart (I have 32GB).

Is there any option to free memory?
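One thing worth trying between generations: drop dead Python references and flush torch's cached allocator blocks (a minimal sketch; on ROCm builds torch.cuda maps to the HIP backend, so the same calls apply to the RX 5700 XT). Launch flags like --medvram can also reduce pressure, though they target VRAM rather than system RAM, so they may not cure a leak like this.

```python
import gc
import torch

# Release Python-side garbage, then return cached GPU blocks to the driver.
gc.collect()
torch.cuda.empty_cache()
```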
 

Nano999

Member
Jun 4, 2022
168
73
LoRAs can add A LOT, depending on what type they are and how many you use. Which sampler you use affects the time as well.
50 steps is generally too many, though, unless you're doing it for very specific reasons/conditions; you shouldn't need that many, so you can save yourself some time there.
Also, why are you running with the skip-CUDA-test option? If there's some kind of issue with torch and CUDA, it could be why things are slower.
If this started after upgrading to 1.8, there could obviously be an issue with the update.
OK, I'll update torch then and see how things go.
 

sharlotte

Member
Jan 10, 2019
300
1,594
SDXL does make beautiful pictures, but I have always found it difficult to generate beautiful nude bodies with it; not so much with clothes on, whether in ComfyUI, A1111 or Forge. But it does generate beautiful pictures. Here are some where I 'borrowed' the prompt from a French forum and used it in ComfyUI (please note some of the steps in the workflow are disconnected, as they're not needed for non-human representations):