[Stable Diffusion] Prompt Sharing and Learning Thread

namhoang909

Newbie
Apr 22, 2017
87
48
For those who like GPT chat, this is a simple workflow for using a GPT model in CUI.
It's also possible to connect it to an image sampler to generate images, but I'm sharing a really basic workflow so you can personalize it as you want.

Note: after installing, you need to download a GPT model in GGUF format and place it in "ComfyUI\models\GPTcheckpoints".
A good place to download GGUF models is .
What is the name of the pack containing the required node, and how do you know which model can generate NSFW content?
The KSampler again; here it crashes at 99%.




I just read a post claiming that the "newer" (7 months old) version of ComfyUI automatically runs in low-VRAM mode for low-VRAM cards, so I think this isn't required anymore.
I would recommend the "efficient pack"; it may save you some nodes, the VAE decode for example.

Edit1: for some unknown reason, the text generation can do NSFW now. :D
Edit2: there is a setting that can turn the noodles into straight lines.
 
Last edited:

Fuchsschweif

Active Member
Sep 24, 2019
857
1,421
I would recommend the "efficient pack"; it may save you some nodes, the VAE decode for example.
With the NVIDIA Studio driver instead of the Game Ready one, I've been crash-free for the last 20+ generations :) I hope it stays like that.

Edit: Nvm, crashed again now :cry: It happened again in the last bit, when the KSampler is done with upscaling and the VAE decode creates the final image.
I'll have to look into this tomorrow..

 
Last edited:
  • Sad
Reactions: Mr-Fox and Sepheyer

me3

Member
Dec 31, 2016
316
708
With the NVIDIA Studio driver instead of the Game Ready one, I've been crash-free for the last 20+ generations :) I hope it stays like that.

Edit: Nvm, crashed again now :cry: It happened again in the last bit, when the KSampler is done with upscaling and the VAE decode creates the final image.
I'll have to look into this tomorrow..

Just to try and narrow down what is going on, you could try replacing the VAE decode node with a tiled version; leaving that at a 512 tile size should remove any chance of VRAM overflow, and it won't really have any negative impact.
Another thing would be to remove both the save image and VAE decode nodes and just use a save latent node; that way you remove any VAE work and still have a "save to output folder" action.

Not sure if you've already tried it, but it could be worth seeing how things go in a different browser. Edge, Chrome and Opera are closely related, but they might have some small difference that is involved. Firefox will have more underlying differences. So there are some options.

Have you tried searching the ComfyUI GitHub issues and/or discussions for your card? There might be others that have had some kind of issue; it might not be the same kind, but the "fix" might be similar.
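For intuition, here's a toy sketch of what a tiled decode buys you: the latent is split into tiles, each tile is decoded on its own, and the results are stitched together, so peak memory scales with the tile size rather than the full image. The `decode` function below is a stand-in (just an 8x upsample to RGB), not a real VAE; the 8x factor matches SD's latent-to-pixel scale.

```python
import numpy as np

def decode(latent):
    """Stand-in for a VAE decode: upsample 8x and map to RGB.
    A real SD VAE is a neural network; this only mimics the shapes."""
    rgb = latent[:3]                      # pretend the first 3 channels are RGB
    return rgb.repeat(8, axis=1).repeat(8, axis=2)

def decode_tiled(latent, tile=64):
    """Decode one tile at a time so peak memory stays bounded.
    A 64-px latent tile corresponds to a 512-px image tile (8x scale)."""
    c, h, w = latent.shape
    out = np.zeros((3, h * 8, w * 8), dtype=latent.dtype)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = latent[:, y:y + tile, x:x + tile]
            out[:, y * 8:(y + tile) * 8, x * 8:(x + tile) * 8] = decode(patch)
    return out

latent = np.random.rand(4, 128, 96).astype(np.float32)  # 4-channel SD latent
full = decode(latent)
tiled = decode_tiled(latent, tile=64)
assert full.shape == tiled.shape == (3, 1024, 768)
assert np.allclose(full, tiled)  # identical here because the toy decode is purely local
```

A real VAE isn't purely local (convolutions see across tile borders), which is why actual tiled decoders overlap and blend tile edges.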
 
  • Like
Reactions: Fuchsschweif

Fuchsschweif

Active Member
Sep 24, 2019
857
1,421
Just to try and narrow down what is going on, you could try replacing the VAE decode node with a tiled version; leaving that at a 512 tile size should remove any chance of VRAM overflow, and it won't really have any negative impact.
Another thing would be to remove both the save image and VAE decode nodes and just use a save latent node; that way you remove any VAE work and still have a "save to output folder" action.
Yes, I read about that tiled VAE decode node, and I wanted to give it a try later. Right now I found out that the installation of another program had changed my power plan to a custom one. So I've now switched to the general "balanced" power plan for AMD Ryzen CPUs to see if that changes something. I could also try the full power plan if that doesn't work out. And then, next up, I'd try the tiled VAE.

Does the final VAE decode node only produce the preview picture in the save image window? Or what's the difference from the save latent?

Not sure if you've already tried it, but it could be worth seeing how things go in a different browser. Edge, Chrome and Opera are closely related, but they might have some small difference that is involved. Firefox will have more underlying differences. So there are some options.
Hmm, good idea!

Have you tried searching the ComfyUI GitHub issues and/or discussions for your card? There might be others that have had some kind of issue; it might not be the same kind, but the "fix" might be similar.
Not for my card specifically, but for the issue of shutting down (some people have that even with the newest RTX 3090 models). But the solutions seem to be individual for all of them. One had another program causing the crashes that I don't have; for someone else it was the power plan changed to "balanced" because of his charging cable (however that's possible); and another one had some programmer extension going on that I don't even have.

But one said the tiled VAE worked for him, so I saved that idea. Although I am skeptical, since my VRAM has never been maxed out when I checked it.
 

Fuchsschweif

Active Member
Sep 24, 2019
857
1,421
Just to try and narrow down what is going on, you could try replacing the VAE decode node with a tiled version; leaving that at a 512 tile size should remove any chance of VRAM overflow, and it won't really have any negative impact.
This seems to work! At least I've made 31 upscaled generations without any shutdown.
Does that mean my GPU didn't have enough VRAM? Even though my task manager showed 5/8 GB in use at peak?

Also, is there a downside that comes with the tiled decode node? What does it do differently?
 

devilkkw

Member
Mar 17, 2021
284
965
What is the name of the pack containing the required node, and how do you know which model can generate NSFW content?
is the node needed; NSFW depends on the model you download. I downloaded a random model with a small size (4 GB) and it seems to work with NSFW, but there are many models over 30 GB! That's too big for me.

This seems to work! At least I've made 31 upscaled generations without any shutdown.
Does that mean my GPU didn't have enough VRAM? Even though my task manager showed 5/8 GB in use at peak?

Also, is there a downside that comes with the tiled decode node? What does it do differently?
I don't know if it's standard, but in my CUI, if there isn't enough memory it automatically switches to tiled VAE.
The driver version is also important: in the latest drivers you have the option to choose whether to redirect memory to system RAM when VRAM runs out. You find it in the NVIDIA Control Panel; it's called CUDA Fallback.
 
  • Like
Reactions: namhoang909

namhoang909

Newbie
Apr 22, 2017
87
48
is the node needed; NSFW depends on the model you download. I downloaded a random model with a small size (4 GB) and it seems to work with NSFW, but there are many models over 30 GB! That's too big for me.


I don't know if it's standard, but in my CUI, if there isn't enough memory it automatically switches to tiled VAE.
The driver version is also important: in the latest drivers you have the option to choose whether to redirect memory to system RAM when VRAM runs out. You find it in the NVIDIA Control Panel; it's called CUDA Fallback.
Are kkw-ph1 and its negative embeddings yours?
 

modine2021

Member
May 20, 2021
329
1,050
I was looking for a small-breasts model of women, but this looks barely legal. There are some questionable sample images if you scroll down far enough. I've got a feeling a few youngsters' images were part of the training. It got banned on other model-sharing sites. What say you? :unsure:
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,525
3,591
I was looking for a small-breasts model of women, but this looks barely legal. There are some questionable sample images if you scroll down far enough. I've got a feeling a few youngsters' images were part of the training. It got banned on other model-sharing sites. What say you? :unsure:
This passes f95's standards for renders being post-puberty.

Besides, most US politicians are on Epstein's list for riding the "Lolita Express" to the "Lolita Island", and the US public keeps voting those parasites in, so morally it is fine even if it breaks the US's own laws. Nobody in the US gives a fuck anyways. And I am certain nobody here will object while the renders show chicks who are ready to go.
 

me3

Member
Dec 31, 2016
316
708
This seems to work! At least I've made 31 upscaled generations without any shutdown.
Does that mean my GPU didn't have enough VRAM? Even though my task manager showed 5/8 GB in use at peak?

Also, is there a downside that comes with the tiled decode node? What does it do differently?
There generally isn't any downside to the tiled version; it can be faster in many cases, even if you have enough VRAM. All it does is break the job down into pieces and do one at a time.

I doubt you're running out of VRAM, considering you have more than me, and it does have a "fallback to tiled" if you run out.
But if reducing the load is enough to fix your problem, it's at least a very simple fix. Unfortunately, it doesn't help much with the "why".

It's a bit hard to help debug, since you're running a much newer card and setup than me, and that means a whole bunch of new "oddities and quirks".
It could be a driver thing, it could be that your card requires some kind of additional Python lib, or just some setting that needs tweaking.
If you haven't already, it could be worth looking up more general "setup instructions" for your card in relation to SD and AI.
Or maybe someone here has the same card and could offer more insight.
 
  • Like
Reactions: Fuchsschweif

hkennereth

Member
Mar 3, 2019
228
738
There generally isn't any downside to the tiled version; it can be faster in many cases, even if you have enough VRAM. All it does is break the job down into pieces and do one at a time.

I doubt you're running out of VRAM, considering you have more than me, and it does have a "fallback to tiled" if you run out.
But if reducing the load is enough to fix your problem, it's at least a very simple fix. Unfortunately, it doesn't help much with the "why".

It's a bit hard to help debug, since you're running a much newer card and setup than me, and that means a whole bunch of new "oddities and quirks".
It could be a driver thing, it could be that your card requires some kind of additional Python lib, or just some setting that needs tweaking.
If you haven't already, it could be worth looking up more general "setup instructions" for your card in relation to SD and AI.
Or maybe someone here has the same card and could offer more insight.
There actually is a downside to using the tiled version: it shifts the colors a little bit in the resulting image, adding contrast and saturation slightly but noticeably on each pass. I actually reported that as a bug about a year ago to Comfy's developer, and he said it was a known issue but he didn't have a fix for it... and so it was never fixed.

My recommendation is to use the standard version unless you know the tiled decode is required.

Edit: here's an example of this color shift. It is subtle, but if you process the image multiple times it adds up.

 
Last edited:

Fuchsschweif

Active Member
Sep 24, 2019
857
1,421
It's a bit hard to help debug, since you're running a much newer card and setup than me, and that means a whole bunch of new "oddities and quirks".
I'm only on a GTX 1070! Maybe there is a big VRAM spike when the VAE decoder is working that I can't spot fast enough in the resource monitor when it crashes..

Anyways, ComfyUI is just l o v e. It's so much better than A1.

I just found a really cool feature: if you drag & drop any image you created with ComfyUI back into it, it will instantly load the entire workflow, seed and everything. So one can easily revisit old pictures and make minor tweaks or more variations.

I just had one where I wanted to fix the hands; I dropped it in, got the whole original setup instantly, and just added some negative prompts to fix the hands. It's fantastic.

With A1 this was always a struggle for me, and it had more steps in between.
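The drag & drop trick works because ComfyUI embeds the workflow as JSON in the PNG's tEXt metadata (under the "workflow" keyword). Here's a stdlib-only sketch that builds a minimal PNG carrying such a chunk and reads it back; the sample graph is made up for illustration, not a real ComfyUI workflow.

```python
import json, struct, zlib

def png_chunk(ctype, data):
    """Serialize one PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def read_text_chunks(png_bytes):
    """Return {keyword: text} from every tEXt chunk in a PNG byte string."""
    assert png_bytes[:8] == b"\x89PNG\r\n\x1a\n"
    pos, out = 8, {}
    while pos < len(png_bytes):
        (length,) = struct.unpack(">I", png_bytes[pos:pos + 4])
        ctype = png_bytes[pos + 4:pos + 8]
        data = png_bytes[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, text = data.partition(b"\x00")
            out[key.decode("latin-1")] = text.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return out

# A made-up graph standing in for a real workflow
workflow = {"nodes": [{"id": 1, "type": "KSampler", "inputs": {"seed": 42}}]}
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)   # 1x1 px, 8-bit grayscale
idat = zlib.compress(b"\x00\x00")                     # filter byte + one pixel
png = (b"\x89PNG\r\n\x1a\n"
       + png_chunk(b"IHDR", ihdr)
       + png_chunk(b"tEXt", b"workflow\x00" + json.dumps(workflow).encode())
       + png_chunk(b"IDAT", idat)
       + png_chunk(b"IEND", b""))

recovered = json.loads(read_text_chunks(png)["workflow"])
assert recovered["nodes"][0]["type"] == "KSampler"
```

This is also why re-saving a render through an image editor usually breaks the trick: most editors strip the text chunks on export.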
 
  • Red Heart
Reactions: hkennereth

me3

Member
Dec 31, 2016
316
708
There actually is a downside to using the tiled version: it shifts the colors a little bit in the resulting image, adding contrast and saturation slightly but noticeably on each pass. I actually reported that as a bug about a year ago to Comfy's developer, and he said it was a known issue but he didn't have a fix for it... and so it was never fixed.

My recommendation is to use the standard version unless you know the tiled decode is required.

Edit: here's an example of this color shift. It is subtle, but if you process the image multiple times it adds up.

You have a similar color shift in the "Ultimate Upscaler" too; I'm assuming it's using a similar tiling method, which would explain it. You "fix" that by using color matching.
The color differences may not be a bad thing though, especially not compared to not being able to render the image, or your whole computer shutting down :p

Another thing with VAE encoding/decoding: you have a "loss" every time you encode and decode, which is why you should try to keep the latent (or image) between nodes and avoid converting back and forth.

Edit:
I've run some tests with two tiled versions and a "normal" VAE decode. Neither of the two tiled ones had a color shift for that image in a single pass, i.e. a first-time decode of a latent. The shift might be additive, and/or some additional condition might need to be involved. This was on an XL model, with no ControlNet, LoRA, etc., no specified lighting conditions, and using the fp16-fixed SDXL VAE.
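The "color matching" fix mentioned above usually means transferring per-channel statistics from a reference image back onto the shifted one. A minimal sketch of that idea (simple mean/std matching; real nodes may use fancier histogram methods):

```python
import numpy as np

def match_color(image, reference):
    """Rescale each channel of `image` so its mean/std match `reference`.
    A basic statistics transfer -- one simple way to undo a mild
    contrast/saturation drift after tiled upscaling. Expects float
    arrays of shape (H, W, 3) with values in [0, 1]."""
    out = np.empty_like(image, dtype=np.float64)
    for c in range(image.shape[2]):
        src, ref = image[..., c], reference[..., c]
        scale = ref.std() / (src.std() + 1e-8)
        out[..., c] = (src - src.mean()) * scale + ref.mean()
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
reference = rng.random((64, 64, 3))
drifted = np.clip(reference * 1.15 + 0.05, 0, 1)  # simulated contrast drift
fixed = match_color(drifted, reference)
# per-channel statistics now track the reference closely
assert abs(fixed[..., 0].mean() - reference[..., 0].mean()) < 0.02
```

Applied after each upscale pass, this keeps the drift from accumulating, at the cost of flattening any intentional color change the pass introduced.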
 
Last edited:

hkennereth

Member
Mar 3, 2019
228
738
You have a similar color shift in the "Ultimate Upscaler" too; I'm assuming it's using a similar tiling method, which would explain it. You "fix" that by using color matching.
The color differences may not be a bad thing though, especially not compared to not being able to render the image, or your whole computer shutting down :p

Another thing with VAE encoding/decoding: you have a "loss" every time you encode and decode, which is why you should try to keep the latent (or image) between nodes and avoid converting back and forth.
Absolutely. The tiled codecs are magical and one of the main reasons I use Comfy to begin with; I was stuck with images no larger than ~1024 px back when I was using A1111 and EasyDiffusion, and now that is the size I start my renders at before upscaling once or twice, while still using the same hardware. But these downsides are something to be aware of so you don't end up with images that are very different from what you expected.
 
  • Like
Reactions: me3

Fuchsschweif

Active Member
Sep 24, 2019
857
1,421
Why do I get pixelated pictures when I set the denoise below 0.55 when upscaling? Even after increasing the steps from 20 to 30 it stays pixelated. Or do I need to increase the steps even higher? I'm running an attempt with 40 right now.
 

Jimwalrus

Active Member
Sep 15, 2021
858
3,195
Why do I get pixelated pictures when I set the denoise below 0.55 when upscaling? Even after increasing the steps from 20 to 30 it stays pixelated. Or do I need to increase the steps even higher? I'm running an attempt with 40 right now.
Could you post an example image, with the gen data (or upscaler settings, if done using a standalone upscaler)? Thanks.
 
  • Like
Reactions: devilkkw

devilkkw

Member
Mar 17, 2021
284
965
Are kkw-ph1 and its negative embeddings yours?
Yes, I made it some time ago.
For the crash error, have you checked whether it happens with other samplers?

Could you post an example image, with the gen data (or upscaler settings, if done using a standalone upscaler)? Thanks.
Yes, please share all the data when you make a request like this; helping is difficult without details.
 
  • Like
Reactions: Jimwalrus

hkennereth

Member
Mar 3, 2019
228
738
Why do I get pixelated pictures when I set the denoise below 0.55 when upscaling? Even after increasing the steps from 20 to 30 it stays pixelated. Or do I need to increase the steps even higher? I'm running an attempt with 40 right now.
The low denoise value is not the reason; I can tell you that much. I upscale my images with denoise values between 0.2 and 0.4, depending on what I'm doing, never higher. I also never use more than 30 steps.

The cause is somewhere else, but I can't really say where with just that information. If you could share some examples of the issue, as well as more details of what you're using (settings for A1111, workflows for ComfyUI), maybe we can help you figure it out.
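For intuition on why raising the total steps doesn't change much at a fixed denoise: in A1111-style img2img, the denoise value roughly sets what fraction of the noise schedule is actually traversed, so the number of steps that really run scales with it. This is a simplification for illustration, not any UI's exact code:

```python
# Rough img2img intuition: the upscaled image is noised up to a point in
# the schedule chosen by `denoise`, then only the remaining steps are
# sampled back down. (A simplification of A1111-style behavior.)
def effective_steps(steps: int, denoise: float) -> int:
    """Approximate number of sampling steps actually executed."""
    return max(1, round(steps * denoise))

# At denoise 0.3, raising total steps from 20 to 40 only moves the
# executed portion from 6 to 12 steps -- the result stays close to the
# raw (possibly pixelated) upscale input either way.
assert effective_steps(20, 0.3) == 6
assert effective_steps(40, 0.3) == 12
assert effective_steps(20, 1.0) == 20
```

So if the upscale input is pixelated, a low denoise faithfully preserves that pixelation; the fix is usually in the upscale method feeding the sampler, not the step count.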
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,525
3,591
Why do I get pixelated pictures when I set the denoise below 0.55 when upscaling? Even after increasing the steps from 20 to 30 it stays pixelated. Or do I need to increase the steps even higher? I'm running an attempt with 40 right now.
Habib, do you understand how much more productive the conversation becomes when we can take your image, pop it into CUI, and troubleshoot it ourselves? Then, instead of a bunch of "maybes", we can say: "here, fixed this thing for you". But it kinda has to start with you. I mean it in a supportive way ;)
 
  • Like
Reactions: Thalies

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
Apparently it's considered a "controversial topic, politics or religion" to be for protecting innocence. SMH.
If someone took offence at what I said, then you are clearly part of the problem.
 
  • Like
Reactions: theMickey_