[Stable Diffusion] Prompt Sharing and Learning Thread

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,145
1,957
It's a bit hard to help debug since you're running a much newer card and setup than mine, and that means a whole bunch of new "oddities and quirks".
I'm only on a GTX 1070! Maybe there is a big VRAM spike when the VAE decoder is working that I can't spot fast enough in the resource manager before it crashes.

Anyways, ComfyUI is just l o v e. It's so much better than A1.

I just found a really cool feature: if you drag & drop any image you created with ComfyUI back into it, it instantly loads the entire workflow, seed and everything. So you can easily revisit old pictures and make minor tweaks or more variations.

I just had one where I wanted to fix the hands: I dropped it in, got the whole original setup instantly, and just added some negative prompts to fix the hands. It's fantastic.

With A1 this was always a struggle for me, with more steps in between.
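(Side note for anyone curious how the drag & drop reload works: ComfyUI embeds the workflow and prompt as JSON in the PNG's metadata, so you can also read it back with a few lines of Python. This is only a rough sketch; the "workflow"/"prompt" keys are what ComfyUI is known to write, and the file name is just a placeholder.)

```python
# Minimal sketch: read the workflow ComfyUI embeds in a generated PNG.
# Assumption: the JSON lives in the PNG text chunks under "workflow"/"prompt",
# and "ComfyUI_00178_.png" is just a placeholder file name.
import json
from PIL import Image

img = Image.open("ComfyUI_00178_.png")
workflow_json = img.info.get("workflow")   # full node graph
prompt_json = img.info.get("prompt")       # executed prompt/parameters

if workflow_json:
    workflow = json.loads(workflow_json)
    print(f"Embedded workflow has {len(workflow.get('nodes', []))} nodes")
```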
 
  • Heart
Reactions: hkennereth

me3

Member
Dec 31, 2016
316
708
There actually is a downside to using the tiled version: it shifts the colors a little in the resulting image, adding contrast and saturation slightly but noticeably on each pass. I actually reported that as a bug about a year ago to Comfy's developer, and he said it was a known issue but he didn't have a fix for it... and so it was never fixed.

My recommendation is to use the standard version unless you know the tiled ones are required.

Edit: here's an example of this color shift. It is subtle, but if you process the image multiple times it adds up.

View attachment 3315435 View attachment 3315436
You get a similar color shift in "Ultimate Upscaler" too; I'm assuming it uses a similar tiling method, which would explain it. You can "fix" that by using color matching.
The color differences may not be a bad thing though, especially not compared to not being able to render the image at all or your whole computer shutting down :p
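For anyone wanting to try that, a rough idea of what such color matching does is a per-channel mean/std transfer back to the pre-pass image. This is only a sketch of the general principle, not the actual node the upscaler uses; the file names are placeholders:

```python
# Sketch: match the processed image's per-channel statistics back to a reference,
# undoing the contrast/saturation drift a tiled pass can add.
import numpy as np
from PIL import Image

def match_color(processed_path: str, reference_path: str, out_path: str) -> None:
    proc = np.asarray(Image.open(processed_path).convert("RGB")).astype(np.float32)
    ref = np.asarray(Image.open(reference_path).convert("RGB")).astype(np.float32)

    for c in range(3):  # shift each RGB channel's mean/std to match the reference
        p, r = proc[..., c], ref[..., c]
        proc[..., c] = (p - p.mean()) / (p.std() + 1e-6) * r.std() + r.mean()

    Image.fromarray(np.clip(proc, 0, 255).astype(np.uint8)).save(out_path)

# placeholder file names
match_color("upscaled.png", "original.png", "upscaled_matched.png")
```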

Another thing with VAE encoding/decoding: there is a "loss" every time you encode and decode, which is why you should try to keep the latent (or image) flowing between nodes rather than converting back and forth.
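If you want to see that round-trip loss for yourself, here's a quick sketch using diffusers' AutoencoderKL as a stand-in (not what Comfy runs internally); the reconstruction is never exact:

```python
# Sketch: measure the error of a single VAE encode -> decode round trip.
# Assumption: diffusers' AutoencoderKL stands in for the SD VAE.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()

image = torch.rand(1, 3, 512, 512) * 2 - 1  # placeholder image, values in [-1, 1]

with torch.no_grad():
    latent = vae.encode(image).latent_dist.sample()
    recon = vae.decode(latent).sample

print("Mean absolute round-trip error:", (image - recon).abs().mean().item())
```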

Edit:
I ran some tests with two tiled versions and a "normal" VAE decode. Neither of the tiled ones showed a color shift for that image in a single pass, i.e. a first decode of a latent. The shift might be additive and/or need some additional condition to show up. This was on an XL model, with no ControlNet, LoRA etc. and no specified lighting conditions, using the fp16-fixed SDXL VAE.
 
Last edited:

hkennereth

Member
Mar 3, 2019
237
775
You get a similar color shift in "Ultimate Upscaler" too; I'm assuming it uses a similar tiling method, which would explain it. You can "fix" that by using color matching.
The color differences may not be a bad thing though, especially not compared to not being able to render the image at all or your whole computer shutting down :p

Another thing with VAE encoding/decoding: there is a "loss" every time you encode and decode, which is why you should try to keep the latent (or image) flowing between nodes rather than converting back and forth.
Absolutely. The tiled codecs are magical and one of the main reasons I use Comfy to begin with; I was stuck with images no larger than ~1024 px back when I was using A1111 and EasyDiffusion, and now that's the size I start my renders at before upscaling once or twice, while still using the same hardware. But these downsides are something to be aware of so you don't end up with images that are very different from what you expected.
 
  • Like
Reactions: me3

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,145
1,957
Why do I get pixelated pictures when I set the denoise below 0.55 when upscaling? Even with increasing the steps from 20 to 30 it stays pixelated. Or do I need to increase the steps even higher? Running an attempt with 40 right now.
 

Jimwalrus

Well-Known Member
Sep 15, 2021
1,047
4,002
Why do I get pixelated pictures when I set the denoise below 0.55 when upscaling? Even with increasing the steps from 20 to 30 it stays pixelated. Or do I need to increase the steps even higher? Running an attempt with 40 right now.
Could you post an example image, with the gen data (or upscaler settings if done using standalone upscaler)? Thanks.
 
  • Like
Reactions: devilkkw

devilkkw

Member
Mar 17, 2021
324
1,094
is kkw-ph1 & its neg embeddings yours?
Yes, I made it a while ago.
For the crash error, have you checked if it happens with other samplers?

Could you post an example image, with the gen data (or upscaler settings if done using standalone upscaler)? Thanks.
Yes, please share all the data when you make a request like this; helping is difficult without details.
 
  • Like
Reactions: Jimwalrus

hkennereth

Member
Mar 3, 2019
237
775
Why do I get pixelated pictures when I set the denoise below 0.55 when upscaling? Even with increasing the steps from 20 to 30 it stays pixelated. Or do I need to increase the steps even higher? Running an attempt with 40 right now.
The low denoise value is not the reason, I can tell you that much. I upscale my images with denoise values between 0.2 and 0.4, depending on what I'm doing, never higher. I also never use more than 30 steps.

The cause is somewhere else, but I can't really say where with just that information. If you could share some examples of the issue, as well as more details of what you're using (settings for A1111, workflows for ComfyUI), maybe we can help you figure it out.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,571
3,768
Why do I get pixelated pictures when I set the denoise below 0.55 when upscaling? Even with increasing the steps from 20 to 30 it stays pixelated. Or do I need to increase the steps even higher? Running an attempt with 40 right now.
Habib, do you understand how much more productive the conversation becomes when we can take your image, pop it into CUI and troubleshoot it for ourselves? Then instead of a bunch of "maybes" we can go: "here, fixed this thing for you". But it kinda has to start with you. I mean it in a supportive way ;)
 
  • Like
Reactions: Thalies

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Apparently it's considered a "controversial topic, politics or religion" to be for protecting innocence. SMH
If someone took offence to what I said then you are part of the problem, clearly.
 
  • Like
Reactions: theMickey_

Sepheyer

Well-Known Member
Dec 21, 2020
1,571
3,768
Apparently it's considered a "controversial topic, politics or religion" to be for protecting innocence. SMH
If someone took offence to what I said then you are part of the problem, clearly.
That snowflake fucking thing is getting out of hand with people not able to let the opposing view stand. I dunno what you wrote but whoever asked to delete what they disagree with is a filthy fag.

The only moderation action I support for this thread is giving a stern warning for not posting the prompt.
 

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,145
1,957
I will provide an example for my other question about the pixelation at low denoise values later, but I have a quick question in between: ComfyUI often becomes somewhat buggy and doesn't start the queue from the beginning, but rather from here:

1706879885176.png

I then have to completely close and restart ComfyUI. Sometimes the queue button just stops working and nothing renders until I restart ComfyUI completely.

Any idea why? Is it possible to choose where rendering starts, and could I have accidentally clicked somewhere to set that? Clicking on a node doesn't seem to set a starting point, though.
 

me3

Member
Dec 31, 2016
316
708
I will provide an example for my other question about the pixelation at low denoise values later, but I have a quick question in between: ComfyUI often becomes somewhat buggy and doesn't start the queue from the beginning, but rather from here:

View attachment 3318994

I then have to completely close and restart ComfyUI. Sometimes the queue button just stops working and nothing renders until I restart ComfyUI completely.

Any idea why? Is it possible to choose where rendering starts, and could I have accidentally clicked somewhere to set that? Clicking on a node doesn't seem to set a starting point, though.
Comfy "cache" the output from nodes, so if there's been no change in the earlier stages it just runs from the change ones.
Using your image as an example, if nothing changed for the first sampler it'll only start with the second one. It's very useful and time saving in most cases. Only times i've noticed it screws up is with some or the lora stack nodes and it doesn't pick up that their weights etc has been changed. (this might have been fixed, i haven't used it in a while)
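Conceptually it's just memoisation keyed on each node's inputs; a toy sketch of the idea (names are illustrative, not ComfyUI's actual internals):

```python
# Toy sketch of per-node output caching: a node only re-executes when its inputs
# (including upstream outputs) change, so a run effectively "starts" at the first
# changed node. Illustrative only, not ComfyUI's real execution code.
import hashlib
import json

_cache = {}

def run_node(node_id, func, inputs):
    key = (node_id, hashlib.sha256(
        json.dumps(inputs, sort_keys=True, default=str).encode()).hexdigest())
    if key not in _cache:
        _cache[key] = func(**inputs)   # only runs when the inputs actually changed
    return _cache[key]
```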
 
  • Like
Reactions: Sepheyer

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,145
1,957
Comfy "cache" the output from nodes, so if there's been no change in the earlier stages it just runs from the change ones.
Using your image as an example, if nothing changed for the first sampler it'll only start with the second one. It's very useful and time saving in most cases. Only times i've noticed it screws up is with some or the lora stack nodes and it doesn't pick up that their weights etc has been changed. (this might have been fixed, i haven't used it in a while)
You mean because I picked "fixed" in the first sampler? That makes sense; it would run a new attempt if I switched to randomize.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,571
3,768
I will provide an example for my other question about the pixelation at low denoise values later, but I have a quick question in between: ComfyUI often becomes somewhat buggy and doesn't start the queue from the beginning, but rather from here:

View attachment 3318994

I then have to completely close and restart ComfyUI. Sometimes the queue button just stops working and nothing renders until I restart ComfyUI completely.

Any idea why? Is it possible to choose where rendering starts, and could I have accidentally clicked somewhere to set that? Clicking on a node doesn't seem to set a starting point, though.
Exactly what me3 said. I can only add: both your samplers have a fixed seed, meaning CUI won't rerun if nothing changes. But if you change the seed on the first sampler to random, then CUI will render every time you press the render button.
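The reason a fixed seed counts as "nothing changed" is simply that the same seed reproduces the same starting noise, so the cached result is still valid. A tiny illustration:

```python
# Tiny illustration: a fixed seed reproduces identical starting noise, so the
# sampler's cached output stays valid; "randomize" changes the input every run.
import torch

torch.manual_seed(1234)
a = torch.randn(4)
torch.manual_seed(1234)
b = torch.randn(4)
print(torch.equal(a, b))  # True: same seed -> same noise -> cache hit
```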
 

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,145
1,957
Exactly what me3 said. I can only add: both your samplers have a fixed seed, meaning CUI won't rerun if nothing changes. But if you change the seed on the first sampler to random, then CUI will render every time you press the render button.
Seems not to work. I just did this:

1706885275755.png

As you can see, both samplers are on "fixed". So I only wanted to refine from the state shown in the low-res preview picture down there. But when I hit Queue Prompt, it generates a completely new image and starts from the beginning. And sometimes it just starts at the 2nd sampler and skips generating a new one.
 
  • Thinking Face
Reactions: Sepheyer

Sepheyer

Well-Known Member
Dec 21, 2020
1,571
3,768
Assuming nothing else changed, and you didn't accidentally change the seed by clicking on the sampler (which is very easy to do), I am surprised at this behavior. It shouldn't act like this, and yet it does.

Out of curiosity - how come the lanes to the second sampler are all white while the same lanes to the first sampler have the proper widget color?
 

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,145
1,957
Out of curiosity - how come the lanes to the second sampler are all white while the same lanes to the first sampler have the proper widget color?
That was because I had clicked on the 2nd sampler, so it was marked and showed all connected cables in white.
 
  • Like
Reactions: Sepheyer

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,145
1,957
And here's the pixel problem. I just increased the upscale from 2 to 3, set the denoise down to 0.40 (so that it sticks closer to the first low-res generation) and increased the steps from 20 to 30.

This is what happens:

1706886513377.png

ComfyUI_00178_.png
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,571
3,768
And here's the pixel problem. I just increased the upscale from 2 to 3, set the denoise down to 0.40 (so that it sticks closer to the first low-res generation) and increased the steps from 20 to 30.

This is what happens:

View attachment 3319266

View attachment 3319267
Oh, I think this is easy. The latent upscale is meant to go to 2.0 max. So, you literally chain samplers to do 2x then 2x and then 2x again.
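For context on why a big latent upscale plus low denoise looks soft: a latent upscale is essentially just interpolation of the latent tensor, so it adds no real detail; the following sampler pass has to invent that detail, which it can't do much of at a very low denoise. A minimal sketch of what the upscale itself amounts to (sizes are just examples):

```python
# Sketch: a "latent upscale" is interpolation of the 4-channel latent tensor.
# The interpolated latent contains no new detail, which is why a low denoise on
# the following sampler can leave the result soft/pixelated.
import torch
import torch.nn.functional as F

latent = torch.randn(1, 4, 64, 64)           # e.g. a 512x512 image in latent space
upscaled = F.interpolate(latent, scale_factor=3, mode="nearest")
print(upscaled.shape)                         # [1, 4, 192, 192] -> a 1536x1536 image
```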
 

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,145
1,957
Oh, I think this is easy. The latent upscale is meant to go to 2.0 max. So, you literally chain samplers to do 2x then 2x and then 2x again.
I don't think so; when I use 3x or even 4x I actually get higher resolutions. I just can't bring the denoise lower than 0.55, otherwise I get this blurred, pixelated look.

The latent upscale is just a multiplier of the original resolution as far as I understand it.