[Stable Diffusion] Prompt Sharing and Learning Thread

devilkkw

Member
Mar 17, 2021
327
1,113
May I ask where do you get that "show text" node?
Does anyone know how to use the refiner in CUI? I haven't found it, but I think there is a workaround
I watched CUI videos on Scott's YouTube channel... while he had a series, I wish he had made it more organized, like txt2img, img2img, IPAdapter, ControlNet, ...
me3 answered it properly.
Unfortunately my computer still sometimes (not always) shuts off when I am using the upscaler. But now, because of the nodes, I can see exactly where it happens: at the very last second, when the upscaled picture reaches 99% and would finally appear in the image view node / be saved on the computer.

Really weird. I previously thought it might be a hardware-related issue, but this seems like it crashes when SD tries to finalize/create the file.
Did you check the temperature? Pushing out the image is the part where your GPU/CPU is most stressed, and if the temperature goes over a certain limit (sometimes you set it in the BIOS) the PC shuts down to prevent damage.
I will try that later, thanks! So instead of the save image node I replace it just with a preview node?
No, the preview image node works the same as save image, but the image is stored in a temp folder that is cleaned every time you run CUI.
 
  • Like
Reactions: namhoang909

devilkkw

Member
Mar 17, 2021
327
1,113
For those who like GPT chat, here is a simple workflow for using a GPT model in CUI.
It's also possible to connect it to an image sampler to generate images, but I'm sharing a really basic workflow so you can personalize it as you want. gptkkw.png

Install the required node with the CUI manager and enjoy.

Note: after installing, you need to download a GPT model in GGUF format and place it in "ComfyUI\models\GPTcheckpoints".
A good place to download GGUF models is .
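As a sketch of the expected layout, placing a downloaded model from a Windows command prompt could look like the following. The .gguf filename is a made-up placeholder; any GGUF chat model should work:

```shell
REM Hypothetical example: "example-chat-model.Q4_K_M.gguf" is a placeholder filename.
REM Run from the ComfyUI install's root folder.
mkdir "ComfyUI\models\GPTcheckpoints"
move "%USERPROFILE%\Downloads\example-chat-model.Q4_K_M.gguf" "ComfyUI\models\GPTcheckpoints\"
```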
 

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,166
1,993
Did you check the temperature? Pushing out the image is the part where your GPU/CPU is most stressed, and if the temperature goes over a certain limit (sometimes you set it in the BIOS) the PC shuts down to prevent damage.
Not with a tool, but by hand it felt warm, not overheated (not so hot that you'd feel pain on your fingertips). It also only crashes when the upscaler is about to reach 100%, so it would be too much of a coincidence for it to overheat right in that second every time.


No, the preview image node works the same as save image, but the image is stored in a temp folder that is cleaned every time you run CUI.
So what am I supposed to do to troubleshoot this further?
 
Last edited:

me3

Member
Dec 31, 2016
316
708
Not with a tool, but by hand it felt warm, not overheated (not so hot that you'd feel pain on your fingertips). It also only crashes when the upscaler is about to reach 100%, so it would be too much of a coincidence for it to overheat right in that second every time.




So what am I supposed to do to troubleshoot this further?

Edit: People suggest adding "--lowvram" somewhere, but they never mention where (I only found a thread about macOS, but I'm on Windows).
If this happens with ComfyUI, which upscale node is it?
There's also a log file; it might contain some detail about what happened if there was some kind of error.
If you install the MTB node pack, it has some additional debug logging you can enable in the settings menu. I haven't tried it, so I don't know how useful it is, but it might give you some idea.


You add --lowvram when launching Comfy: if you start it through the command line, you add it after the bat file name. If you start it by double-clicking the bat, you need to edit it slightly and add the option at the end of the line launching main.py.
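For a standard Windows portable install, the edited launch line might look like the sketch below. The bat filename and the other flags vary by install, so treat everything except --lowvram itself as an assumption:

```shell
REM Inside e.g. run_nvidia_gpu.bat (filename varies per install);
REM --lowvram is simply appended at the end of the line launching main.py.
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --lowvram
```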
 

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,166
1,993
If this happens with ComfyUI, which upscale node is it?
The KSampler again; here it crashes at 99%

1706655606577.png

You add --lowvram when launching Comfy: if you start it through the command line, you add it after the bat file name. If you start it by double-clicking the bat, you need to edit it slightly and add the option at the end of the line launching main.py.
I just read a post claiming that the "newer" (7 months old) version of ComfyUI automatically runs in low-VRAM mode for low-VRAM cards, so I think this isn't required anymore.
 

me3

Member
Dec 31, 2016
316
708
The KSampler again; here it crashes at 99%

View attachment 3310601

I just read a post claiming that the "newer" (7 months old) version of ComfyUI automatically runs in low-VRAM mode for low-VRAM cards, so I think this isn't required anymore.
There can be some memory spikes at the end of sampler operations, so I guess there's a chance it's related to a VRAM overflow or an offload to RAM. There could be some kind of memory access violation, but I'm not sure why/how.
If your card is Nvidia, I'd recommend updating the driver and checking what the "memory overflow" setting is set to.
In the 3D settings of the Nvidia control panel there should be a setting called something like "CUDA sysmem fallback policy".
System fallback allows overflow from VRAM to RAM if you "run out"; no fallback obviously gets you an OOM error when you run out of VRAM.
You can set this for Comfy specifically by using the program-specific settings and adding the python exe used by Comfy. It is probably in the python_embeded folder of Comfy.
 
  • Like
Reactions: Fuchsschweif

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,166
1,993
There can be some memory spikes at the end of sampler operations, so I guess there's a chance it's related to a VRAM overflow or an offload to RAM. There could be some kind of memory access violation, but I'm not sure why/how.
If your card is Nvidia, I'd recommend updating the driver and checking what the "memory overflow" setting is set to.
In the 3D settings of the Nvidia control panel there should be a setting called something like "CUDA sysmem fallback policy".
System fallback allows overflow from VRAM to RAM if you "run out"; no fallback obviously gets you an OOM error when you run out of VRAM.
You can set this for Comfy specifically by using the program-specific settings and adding the python exe used by Comfy. It is probably in the python_embeded folder of Comfy.
Thanks! I downloaded the Nvidia Studio driver instead of the Game Ready one after my last post, and for the past 6 generations I've had no crash so far. Fingers crossed. If it happens again, I'll try your advice!

PS: My GPU is at 60-70°C at max, so all is fine in that department.

It seems to use 5 of the card's 8 GB, and in another window the task manager shows 5/16 GB. I don't know why it shows 16; maybe it's some sort of virtual VRAM (probably "shared GPU memory", i.e. system RAM the GPU can spill into). But neither seems to be at max.
 
Last edited:

namhoang909

Newbie
Apr 22, 2017
89
48
For those who like GPT chat, here is a simple workflow for using a GPT model in CUI.
It's also possible to connect it to an image sampler to generate images, but I'm sharing a really basic workflow so you can personalize it as you want.

Note: after installing, you need to download a GPT model in GGUF format and place it in "ComfyUI\models\GPTcheckpoints".
A good place to download GGUF models is .
What is the name of the pack containing the required node, and how do you know which models can generate NSFW content?
The KSampler again; here it crashes at 99%

View attachment 3310601



I just read a post claiming that the "newer" (7 months old) version of ComfyUI automatically runs in low-VRAM mode for low-VRAM cards, so I think this isn't required anymore.
1706660289848.png
I would recommend the "efficient pack"; it can save you some nodes, the VAE decode for example.

Edit 1: for some unknown reason the text generation can do NSFW now.:D
Edit 2: there is a setting that can turn the noodles into straight lines.
 
Last edited:

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,166
1,993
I would recommend the "efficient pack"; it can save you some nodes, the VAE decode for example.
With the Nvidia Studio driver instead of the Game Ready one, I've been crash-free for the last 20+ generations :) I hope it stays like that.

Edit: Nvm, crashed again now :cry: It again happened at the very end, when the KSampler is done with upscaling and the VAE decode creates the final image.
Have to look into this tomorrow..

1706662427593.png
 
Last edited:
  • Sad
Reactions: Mr-Fox and Sepheyer

me3

Member
Dec 31, 2016
316
708
With the Nvidia Studio driver instead of the Game Ready one, I've been crash-free for the last 20+ generations :) I hope it stays like that.

Edit: Nvm, crashed again now :cry: It again happened at the very end, when the KSampler is done with upscaling and the VAE decode creates the final image.
Have to look into this tomorrow..

View attachment 3310755
Just to try and narrow down what is going on, you could try replacing the VAE decode node with the tiled version; leaving it at a 512 tile size should remove any chance of VRAM overflow, and it won't really have any negative impact.
Another thing would be to remove both the save image and VAE decode nodes and just use a save latent node; that way you remove the VAE entirely and still have a "save to output folder" action.
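The idea behind a tiled decode can be sketched in a few lines. This is not ComfyUI's actual implementation, just an illustration of the principle: decode the latent in fixed-size tiles so peak memory scales with the tile size instead of the full image. `decode_fn` is a hypothetical stand-in for the real VAE decoder:

```python
import numpy as np

def tiled_decode(latent, decode_fn, tile=64):
    """Decode a (C, H, W) latent tile by tile.

    decode_fn maps a latent tile to a (3, h*8, w*8) image tile
    (SD's VAE upscales latents 8x per side).
    """
    _, h, w = latent.shape
    s = 8  # VAE spatial upscale factor
    out = np.zeros((3, h * s, w * s), dtype=np.float32)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            part = latent[:, y:y + tile, x:x + tile]
            ph, pw = part.shape[1], part.shape[2]
            # each decoded tile lands in the matching region of the output
            out[:, y * s:(y + ph) * s, x * s:(x + pw) * s] = decode_fn(part)
    return out
```

The real node also overlaps tiles and blends the seams; this sketch skips that, which is exactly why naive tiling can show seams or slight shifts at tile borders.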

Not sure if you've already tried it, but it could be worth seeing how things go in a different browser. Edge, Chrome and Opera are closely related, but they might have some small difference that's involved. Firefox has more underlying differences. So there are some options.

Have you tried searching the ComfyUI GitHub issues and/or discussions for your card? There might be others who have had some kind of issue; it might not be the same kind, but the "fix" might be similar.
 
  • Like
Reactions: Fuchsschweif

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,166
1,993
Just to try and narrow down what is going on, you could try replacing the VAE decode node with the tiled version; leaving it at a 512 tile size should remove any chance of VRAM overflow, and it won't really have any negative impact.
Another thing would be to remove both the save image and VAE decode nodes and just use a save latent node; that way you remove the VAE entirely and still have a "save to output folder" action.
Yes, I read about that tiled VAE decode node and wanted to give it a try later. Right now I found out that installing another program had changed my computer's power plan to a custom one. So I've now switched to the general "balanced" power plan for AMD Ryzen CPUs to see if that changes something. I could also try the full-power plan if that doesn't work out. And then, next up, I'd try the tiled VAE.

Does the final VAE decode node only produce the preview picture in the save image window? Or what's the difference from the save latent?

Not sure if you've already tried it, but it could be worth seeing how things go in a different browser. Edge, Chrome and Opera are closely related, but they might have some small difference that's involved. Firefox has more underlying differences. So there are some options.
Hmm, good idea!

Have you tried searching the ComfyUI GitHub issues and/or discussions for your card? There might be others who have had some kind of issue; it might not be the same kind, but the "fix" might be similar.
Not for my card specifically, but for the issue of shutting down (some people have that even with the newest RTX 3090 models). But the solutions seem to be individual for each of them. One had another program causing the crashes that I don't have; for someone else it was the power plan being changed to "balanced" because of his charging cable (however that's possible); and another one had some programmer extension going on that I don't even have.

But one said the tiled VAE worked for him, so I've saved that idea. Although I'm skeptical, since my VRAM has never been maxed out when I checked.
 

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,166
1,993
Just to try and narrow down what is going on, you could try replacing the VAE decode node with the tiled version; leaving it at a 512 tile size should remove any chance of VRAM overflow, and it won't really have any negative impact.
This seems to work! At least I've made 31 upscaled generations without any shutdown.
Does that mean my GPU didn't have enough VRAM, even though my task manager showed 5/8 GB in usage at peak?

Also, is there a downside that comes with the tiled decode node? What does it do differently?
 

devilkkw

Member
Mar 17, 2021
327
1,113
What is the name of the pack containing the required node, and how do you know which models can generate NSFW content?
is the node needed, and NSFW capability depends on the model you download. I downloaded a random small model (4 GB) and it seems to work with NSFW, but there are many models over 30 GB, which is too big for me.

This seems to work! At least I've made 31 upscaled generations without any shutdown.
Does that mean my GPU didn't have enough VRAM, even though my task manager showed 5/8 GB in usage at peak?

Also, is there a downside that comes with the tiled decode node? What does it do differently?
I don't know if this is standard, but in my CUI, if there isn't enough memory it automatically switches to tiled VAE.
The driver version is also important: in the latest drivers you can choose whether to redirect memory to RAM when VRAM isn't enough. You find it in the Nvidia control panel; it's called CUDA sysmem fallback.
 
  • Like
Reactions: namhoang909

namhoang909

Newbie
Apr 22, 2017
89
48
is the node needed, and NSFW capability depends on the model you download. I downloaded a random small model (4 GB) and it seems to work with NSFW, but there are many models over 30 GB, which is too big for me.


I don't know if this is standard, but in my CUI, if there isn't enough memory it automatically switches to tiled VAE.
The driver version is also important: in the latest drivers you can choose whether to redirect memory to RAM when VRAM isn't enough. You find it in the Nvidia control panel; it's called CUDA sysmem fallback.
Are kkw-ph1 and its negative embeddings yours?
 

modine2021

Member
May 20, 2021
433
1,444
I was looking for a small-breasts model of women, but this one looks barely legal. There are some questionable sample images if you scroll down far enough. I've got a feeling a few youngsters' images were part of the training. It got banned on other model-sharing sites. What say you? :unsure:
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,575
3,776
I was looking for a small-breasts model of women, but this one looks barely legal. There are some questionable sample images if you scroll down far enough. I've got a feeling a few youngsters' images were part of the training. It got banned on other model-sharing sites. What say you? :unsure:
This passes f95's standards for renders being post-puberty.

Besides, most US politicians are on Epstein's list for riding the "Lolita Express" to "Lolita Island", and the US public keeps voting those parasites in, so morally it is fine even if it breaks the US's own laws. Nobody in the US gives a fuck anyway. And I am certain nobody here will object while the renders show chicks who are ready to go.
 

me3

Member
Dec 31, 2016
316
708
This seems to work! At least I've made 31 upscaled generations without any shutdown.
Does that mean my GPU didn't have enough VRAM, even though my task manager showed 5/8 GB in usage at peak?

Also, is there a downside that comes with the tiled decode node? What does it do differently?
There generally isn't any downside to the tiled version; it can even be faster in many cases, even when you have enough VRAM. All it does is break the job down into pieces and do them one at a time.

I doubt you're running out of VRAM, considering you have more than me, and Comfy does have a "fallback to tiled" if you run out.
But if reducing the load is enough to fix your problem, it's at least a very simple fix. Unfortunately it doesn't help much with the "why".

It's a bit hard to help debug, since you're running a much newer card and setup than me, which means a whole bunch of new "oddities and quirks".
It could be a driver thing, it could be that your card requires some additional Python lib, or just some setting that needs tweaking.
If you haven't already, it could be worth looking up more general "setup instructions" for your card in relation to SD and AI.
Or maybe someone here has the same card and can offer more insight.
 
  • Like
Reactions: Fuchsschweif

hkennereth

Member
Mar 3, 2019
239
784
There generally isn't any downside to the tiled version; it can even be faster in many cases, even when you have enough VRAM. All it does is break the job down into pieces and do them one at a time.

I doubt you're running out of VRAM, considering you have more than me, and Comfy does have a "fallback to tiled" if you run out.
But if reducing the load is enough to fix your problem, it's at least a very simple fix. Unfortunately it doesn't help much with the "why".

It's a bit hard to help debug, since you're running a much newer card and setup than me, which means a whole bunch of new "oddities and quirks".
It could be a driver thing, it could be that your card requires some additional Python lib, or just some setting that needs tweaking.
If you haven't already, it could be worth looking up more general "setup instructions" for your card in relation to SD and AI.
Or maybe someone here has the same card and can offer more insight.
There actually is a downside to using the tiled version: it shifts the colors in the resulting image a little, adding contrast and saturation slightly but noticeably on each pass. I reported that as a bug to Comfy's developer about a year ago, and he said it was a known issue but he didn't have a fix for it... and so it was never fixed.

My recommendation is to use the standard version unless you know the tiled decode is required.

Edit: here's an example of this color shift. It is subtle, but if you process the image multiple times it adds up.

midres_dd_0024.jpg midres_0021.jpg
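A rough way to see why "it adds up": even a small per-pass gain compounds multiplicatively across passes. The 2% figure below is an assumed number for illustration, not a measured value:

```python
# Assumed 2% contrast/saturation gain per tiled-decode pass (illustrative only)
gain = 1.02
passes = 5
print(round(gain ** passes, 3))  # 1.104, i.e. roughly a 10% cumulative shift
```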
 
Last edited: