[Stable Diffusion] Prompt Sharing and Learning Thread

me3

Member
Dec 31, 2016
316
708
Been testing some detailing (and upscaling).
First image is just detailing parts of the image, in this case the woman and armor, while leaving the background untouched.
Second image is an iterative upscale with very low and decreasing denoising. The background becomes very "busy", and when testing there were cases where it became almost oversaturated because the colors became rather heavy/dominating.

Looks like more masks might be needed in the future... if only there were better mask editors in the UIs.

maskdet_00001_.jpg maskdet_00002_.jpg
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,526
3,596
Been testing some detailing (and upscaling).
First image is just detailing parts of the image, in this case the woman and armor, while leaving the background untouched.
Second image is an iterative upscale with very low and decreasing denoising. The background becomes very "busy", and when testing there were cases where it became almost oversaturated because the colors became rather heavy/dominating.

Looks like more masks might be needed in the future... if only there were better mask editors in the UIs.

View attachment 3307430 View attachment 3307429
Please, can you post the workflow? This looks like image-2-image, or at least OpenPose. Still, I'm interested to see how the iterative upscale was done.
 
  • Like
Reactions: DD3DD

me3

Member
Dec 31, 2016
316
708
Please, can you post the workflow? This looks like image-2-image, or at least OpenPose. Still, I'm interested to see how the iterative upscale was done.
Will have to finish the current run first at least. Apparently heunpp seems to run HORRIBLY slow on SDXL; it didn't seem that bad on SD1.5...
Be warned though, it's a nice mess as it's not just those nodes. It might give those noodle haters nightmares, so I guess there's an upside :p

Edit: trying to post a workflow image but the forum won't let me, it's too big... *sigh*
 
  • Haha
Reactions: Mr-Fox and DD3DD

devilkkw

Member
Mar 17, 2021
294
992
There are different prompt interpreters in ComfyUI, one of which copies the one used in a1111.

There are also nodes that set up settings etc. to be the same as, or close to, a1111.

Another thing is that by default a1111 uses the GPU for things like seed/randomness, while Comfy's default is the CPU. You can change this in both UIs, and you can see there can be a very large difference between the two options. Comfy can do this on a per-sampler-node basis if you use the correct one.

Regarding prompting, I haven't used a1111 for XL and haven't checked the code, but I believe it still dumps the same prompt into both text encoders. There are people who argue this is the absolute way of doing it, but if you try feeding different prompting styles, or just parts of the prompt, to each encoder, you quickly see that you can use this to your advantage.
I really don't think people should lock themselves too much into one style or way of writing prompts; that will quickly turn into the mess people just keep dumping into their negatives.

Creativity is "freedom".
Very interesting, I looked at the sampler code and there are some differences in it. But I'm not skilled enough to understand every line of code; I just like seeing different approaches to getting a good result.

Speaking of code, I made a mod for the CUI dynamic prompt nodes, related to the trick I posted yesterday (entire-prompt weighting).

I also made a fork and pull request on GitHub, but I'm not sure how that works because it's the first time I've used a fork and pull request.

So the mod is simple: it adds a weight to the node. If the weight is 1, nothing happens and you get the standard prompt (see the result in the "Show text" node):
mod-dyn.prompt-2.jpg-w.jpg

But if you go up with the weight, the whole prompt is automatically enclosed and weighted:
mod-dyn.prompt-1.jpg-w.jpg
Maybe this is useless for some, but for experimenting it's good, I think.
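If it helps to see the idea outside the node graph, the entire-prompt weighting trick can be sketched in a few lines of plain Python (the function name is mine, not the actual node code; the syntax mirrors the a1111/ComfyUI attention style):

```python
def weight_prompt(prompt: str, weight: float = 1.0) -> str:
    """Wrap a whole prompt in attention syntax: (prompt:weight).

    With weight 1.0 the prompt passes through unchanged, matching the
    "nothing happens" behaviour of the mod; any other value encloses
    and weights the entire prompt at once.
    """
    if weight == 1.0:
        return prompt
    return f"({prompt}:{weight:.2f})"

print(weight_prompt("a knight in ornate armor"))       # passes through unchanged
print(weight_prompt("a knight in ornate armor", 1.3))  # (a knight in ornate armor:1.30)
```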

Also, about XL: I tested on a1111 v1.7 and in CUI, but I don't like XL so my test was fast, just to see if XL works.
Also, in a1111 I switched back to 1.5.2, the only version that doesn't cause memory problems on standard SD1.5 models.

Maybe in the future I'll try XL properly, or wait for some other update better than XL.
 

Fuchsschweif

Active Member
Sep 24, 2019
883
1,478
Learn ComfyUI, part one of a series by Olivio Sarikas:

Sepheyer and me3, plus a couple more, are our local ComfyUI experts. I'm sure they will help you too.

Who knows, me and Jim might be persuaded to give it a go also. I know that I for one am intrigued and curious about it, but out of boneheadedness I haven't taken the plunge yet. ;)
Thanks man! Found the time to work through the tutorial now. I needed to watch other ones for the installation first, but then went with your link. I set up a basic workflow with 3 different outputs. It works faster and smoother than A1 did, as I remember it, and personally I think the nodes and cables are waaaaay more fun.

When I copied the output windows two times (the 4 in a row), it for whatever reason did not connect some of the cables, but the ones that were missing were marked with a red circle, so I connected them again manually. I find this visual cue pretty nice, and it shows that they thought about good UX (at least so far).

I haven't gotten to upscalers yet, but this is what my session looks like right now. I've worked with other software before that also makes use of complex routings and modules/nodes, so working this way feels like home for me. And like I said, it's a lot of fun! Feels more personal than the clunky A1 UI.

1706573279869.png
 

me3

Member
Dec 31, 2016
316
708
May I ask where you get that "show text" node?
Does anyone know how to use the refiner in CUI? I haven't found it, but I think there is a workaround.
I watch CUI on Scott's YouTube channel... while he had a series, I wish he had made it more organized, like txt2img, img2img, IPAdapter, controlnet, ...
If you have the manager installed, search for a node pack called "Comfyui custom scripts"; it's made by someone named "pythongosssss" (might be a few s more or less). It has a bunch of useful nodes and other features.

As for the refiner, if you're referring to the SDXL "second" model that was released along with the base: most other SDXL models don't use it, but you can use that, or any model, as a "top finishing layer". There are some sampler nodes that have an input for the refiner and a ratio setting for how many of the steps to use with just the refiner. I can't remember specifics atm as there are SO MANY sampler nodes.
The other option is to use 2 samplers; then you can run the first for, say, 15 steps and the second for 10. Some samplers have options for the total number of steps and for the start and end step, so you could set the total steps to 40, start at 0 and end at 25 in the first sampler, then 40, 25 and 40 in the second.
This is just one way to do it, and these numbers are just examples; it's one more thing you can tune to find what works best for your prompt etc., but it might give you a hint how.
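The step-split in that second option is easy to reason about as plain arithmetic; this little helper (hypothetical, just mirroring the 40/25 example above) shows how the two samplers share one schedule:

```python
def split_steps(total_steps: int, handoff: int):
    """Split one denoising schedule between a base and a refiner sampler.

    The base sampler runs steps [0, handoff), the refiner finishes
    [handoff, total_steps), as with ComfyUI's start/end-step settings.
    """
    base_range = (0, handoff)
    refiner_range = (handoff, total_steps)
    return base_range, refiner_range

base, refiner = split_steps(total_steps=40, handoff=25)
print(base)     # (0, 25)  -> first sampler: steps 40, start 0, end 25
print(refiner)  # (25, 40) -> second sampler: steps 40, start 25, end 40
```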
 

Fuchsschweif

Active Member
Sep 24, 2019
883
1,478
Thanks man! Found the time to work through the tutorial now. I needed to watch other ones for the installation first, but then went with your link. I set up a basic workflow with 3 different outputs. It works faster and smoother than A1 did, as I remember it, and personally I think the nodes and cables are waaaaay more fun.

When I copied the output windows two times (the 4 in a row), it for whatever reason did not connect some of the cables, but the ones that were missing were marked with a red circle, so I connected them again manually. I find this visual cue pretty nice, and it shows that they thought about good UX (at least so far).

I haven't gotten to upscalers yet, but this is what my session looks like right now. I've worked with other software before that also makes use of complex routings and modules/nodes, so working this way feels like home for me. And like I said, it's a lot of fun! Feels more personal than the clunky A1 UI.

View attachment 3307890

Unfortunately my computer still sometimes (not always) shuts off when I am using the upscaler. But now, because of the nodes, I can see exactly where it happens... it's at the very last second, when the upscaled picture has reached 99% and would finally appear in the image view node / be saved on the computer.

Really weird. I previously thought it might be a hardware-related issue, but this looks like it crashes when SD tries to finalize/create the file.
 
  • Like
Reactions: Sepheyer

Sepheyer

Well-Known Member
Dec 21, 2020
1,526
3,596
Unfortunately my computer still sometimes (not always) shuts off when I am using the upscaler. But now, because of the nodes, I can see exactly where it happens... it's at the very last second, when the upscaled picture has reached 99% and would finally appear in the image view node / be saved on the computer.

Really weird. I previously thought it might be a hardware-related issue, but this looks like it crashes when SD tries to finalize/create the file.
You can troubleshoot it further - rather than saving to a file, you can stop the workflow at the preview level.

I trust you know, the CUI goes:

  1. prompt ->
  2. encode into latent ->
  3. latent manipulation (including upscale, which is also a manipulation) ->
  4. decode latent into pixels ->
  5. preview (or/and save) ->
  6. pixel upscale (optional)

I think the most hardcore piece is #4, as it is the most demanding. I.e. you can safely juggle large latents, but decoding them into pixels is the true bottleneck. So you can crop your workflow so it never gets to the decode, to test whether the shutdowns continue.
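A bit of back-of-the-envelope arithmetic shows why #4 bites: SD latents live at 1/8 resolution with 4 channels, while the decoded image is full resolution with 3 channels, so the VAE decode suddenly has to produce far more data (and that's ignoring the VAE's own intermediate activations, which make it even worse). A rough sketch in plain element counts:

```python
def latent_elems(width: int, height: int) -> int:
    # SD latent space: 1/8 resolution in each dimension, 4 channels.
    return (width // 8) * (height // 8) * 4

def pixel_elems(width: int, height: int) -> int:
    # Decoded image: full resolution, 3 channels (RGB).
    return width * height * 3

w, h = 2048, 2048
print(latent_elems(w, h))                       # 262144
print(pixel_elems(w, h))                        # 12582912
print(pixel_elems(w, h) // latent_elems(w, h))  # 48x more values after decode
```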

Naturally, it may be that, as you said, the shutdowns take place strictly during saving. Then disable the saving node and test the workflow again. If the shutdowns stop, you'll know there is something wrong with the actual input/output and you might want to try workarounds.
 

devilkkw

Member
Mar 17, 2021
294
992
May I ask where you get that "show text" node?
Does anyone know how to use the refiner in CUI? I haven't found it, but I think there is a workaround.
I watch CUI on Scott's YouTube channel... while he had a series, I wish he had made it more organized, like txt2img, img2img, IPAdapter, controlnet, ...
me3 answered it properly.
Unfortunately my computer still sometimes (not always) shuts off when I am using the upscaler. But now, because of the nodes, I can see exactly where it happens... it's at the very last second, when the upscaled picture has reached 99% and would finally appear in the image view node / be saved on the computer.

Really weird. I previously thought it might be a hardware-related issue, but this looks like it crashes when SD tries to finalize/create the file.
Did you check the temperature? Pushing out the image is the part where your GPU/CPU is most stressed, and if the temperature goes over a certain limit (sometimes you set it in the BIOS) the PC shuts down to prevent damage.
I will try that later, thanks! So I just replace the save image node with a preview node?
No, preview image works the same as save image, but the image is stored in a temp folder that is cleaned every time you run CUI.
 
  • Like
Reactions: namhoang909

devilkkw

Member
Mar 17, 2021
294
992
For those who like GPT chat, here is a simple workflow for using a GPT model in CUI.
It's also possible to connect it to an image sampler to generate images, but I'm sharing a really basic workflow so you can personalize it as you want. gptkkw.png

Install the required node with the CUI manager and enjoy.

Note: after installing, you need to download a GPT model in gguf format and place it in "ComfyUI\models\GPTcheckpoints".
A good place to download gguf models is .
 

Fuchsschweif

Active Member
Sep 24, 2019
883
1,478
Did you check the temperature? Pushing out the image is the part where your GPU/CPU is most stressed, and if the temperature goes over a certain limit (sometimes you set it in the BIOS) the PC shuts down to prevent damage.
Not with a tool, but by hand it felt warm, not overheated (not so hot that you'd feel pain on your fingertips). It also only crashes when the upscaler is about to reach 100%, so it would be too much of a coincidence for it to always overheat right at that second.


No, preview image works the same as save image, but the image is stored in a temp folder that is cleaned every time you run CUI.
So what am I supposed to do to troubleshoot this further?
 

me3

Member
Dec 31, 2016
316
708
Not with a tool, but by hand it felt warm, not overheated (not so hot that you'd feel pain on your fingertips). It also only crashes when the upscaler is about to reach 100%, so it would be too much of a coincidence for it to always overheat right at that second.




So what am I supposed to do to troubleshoot this further?

Edit: People suggest adding "--lowvram" somewhere, but they never mention where (I only found a thread about macOS, but I am on Windows).
If this happens with ComfyUI, which upscale node is it?
There's also a log file; it might contain some detail on what happened if there was some kind of error.
If you install the node pack MTB, it has some additional debug logging you can enable in the settings menu. I haven't tried it so I don't know how useful it is, but it might give you some idea.


--lowvram you add when launching Comfy. If you start it through the command line, you add it after the bat file name. If you start it by double-clicking the bat, you need to edit it slightly and add the option at the end of the line launching main.py.
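As an example, assuming the typical Windows portable install where the launcher bat calls the embedded Python (your bat's contents may differ), the edited launch line would look something like:

```shell
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --lowvram
```

The flag just goes at the end of the existing line; the rest of the bat file stays untouched.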
 

Fuchsschweif

Active Member
Sep 24, 2019
883
1,478
If this happens with ComfyUI, which upscale node is it?
The KSampler again; here it crashes at 99%

1706655606577.png

--lowvram you add when launching Comfy. If you start it through the command line, you add it after the bat file name. If you start it by double-clicking the bat, you need to edit it slightly and add the option at the end of the line launching main.py.
I just read a post claiming that the "newer" (7 months old) version of ComfyUI automatically runs in low-VRAM mode for low-VRAM cards... so I think this isn't required anymore.
 

me3

Member
Dec 31, 2016
316
708
The KSampler again; here it crashes at 99%

View attachment 3310601

I just read a post claiming that the "newer" (7 months old) version of ComfyUI automatically runs in low-VRAM mode for low-VRAM cards... so I think this isn't required anymore.
There can be some memory spikes at the end of sampler operations, so I guess there's a chance it's related to a VRAM overflow or an offload to RAM. There could be some kind of memory access violation, but I'm not sure why/how.
If your card is NVIDIA, I'd recommend updating the driver and checking what the "memory overflow" setting is set to.
In the 3D settings of the NVIDIA control panel there should be a setting called something like "CUDA sysmem fallback policy".
System fallback allows overflow from VRAM to RAM if you "run out"; no fallback obviously means you get an OOM error when running out of VRAM.
You can set this for Comfy specifically by using the program-specific settings and adding the Python exe used by Comfy. This can likely be found in the python_embeded folder in Comfy.
 
  • Like
Reactions: Fuchsschweif

Fuchsschweif

Active Member
Sep 24, 2019
883
1,478
There can be some memory spikes at the end of sampler operations, so I guess there's a chance it's related to a VRAM overflow or an offload to RAM. There could be some kind of memory access violation, but I'm not sure why/how.
If your card is NVIDIA, I'd recommend updating the driver and checking what the "memory overflow" setting is set to.
In the 3D settings of the NVIDIA control panel there should be a setting called something like "CUDA sysmem fallback policy".
System fallback allows overflow from VRAM to RAM if you "run out"; no fallback obviously means you get an OOM error when running out of VRAM.
You can set this for Comfy specifically by using the program-specific settings and adding the Python exe used by Comfy. This can likely be found in the python_embeded folder in Comfy.
Thanks! I downloaded the NVIDIA Studio driver instead of the Game Ready driver after my last post, and for the past 6 generations I've had no crash so far. Fingers crossed. If it happens again, I'll try your advice!

PS: My GPU is at 60-70°C max, so all fine in that department.

It seems to use 5 of the original 8GB, and in another window the task manager shows 5/16GB. I don't know why it shows 16, maybe some sort of virtual VRAM. But neither seems to be at max.
 