[Stable Diffusion] Prompt Sharing and Learning Thread

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,145
1,957
I get the same blurred look. Strange-o. Hmm.
I am creating a 3x version with 0.55 denoise now and will post the result; you'll see it's clear but higher res than the 2x I posted above.

"Funny" side note, this time Comfyui started right away with the process of upscaling, while it previously entirely generated the first picture new. Would be cool to know what determines this so that I have control over this, if anyone who's reading this knows the answer :D
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,571
3,768
What I wrote below is incorrect - hkennereth noticed my denoiser was 0.5.
---
Maybe something went completely tits up in your workflow and got corrupted and shit? I kept getting pixels when running yours, but then I replaced the whole thing with my old latent-upscale-only workflow and popped your prompts in, and everything worked fine (other than being weird AF):

a_18671_.png
 

hkennereth

Member
Mar 3, 2019
237
775
I am creating a 3x version with 0.55 denoise now and will post the result; you'll see it's clear but higher res than the 2x I posted above.

"Funny" side note, this time Comfyui started right away with the process of upscaling, while it previously entirely generated the first picture new. Would be cool to know what determines this so that I have control over this, if anyone who's reading this knows the answer :D
Regarding the "pixelated" image: I don't believe the latent upscale works at all with denoise values lower than 0.5. I could never get it to work, which is probably a better way to describe it. And I prefer to use the upscale-with-model technique anyway.

As for your next question: did you tweak the prompt at all? As pointed out, Comfy only resets cached nodes if anything changes before them, so if you change the prompt, then every single rendering that depends on that prompt will need to be redone. That is true for every parameter of every node; if you change one earlier in the chain, every node after it has to be recalculated.
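That caching rule can be sketched roughly like this (a toy illustration, not ComfyUI's actual code; all names here are made up): a node's cache key folds in its own parameters and the keys of everything upstream, so any upstream change forces every downstream node to recompute.

```python
import hashlib

# Toy model of Comfy-style caching (hypothetical, not the real implementation):
# a node's key depends on its own params AND all upstream keys.
_cache: dict[str, tuple[str, object]] = {}

def node_key(params: dict, upstream_keys: list[str]) -> str:
    payload = repr(sorted(params.items())) + "|".join(upstream_keys)
    return hashlib.sha256(payload.encode()).hexdigest()

def run_node(name: str, params: dict, upstream_keys: list[str], compute):
    key = node_key(params, upstream_keys)
    cached = _cache.get(name)
    if cached and cached[0] == key:
        return cached[1], key   # nothing changed upstream: reuse cached output
    result = compute()          # something changed here or upstream: recompute
    _cache[name] = (key, result)
    return result, key
```

Under this model, tweaking only a downstream node's denoise leaves the first sampler's key untouched, which is why a downstream-only change should not regenerate the base image.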
 

Fuchsschweif

Maybe something went completely tits up in your workflow and got corrupted and shit?
I wouldn't know what could have. I posted my workflow; it's quite simple.

Here's the 3x upscaler with 0.55 denoise:

As you can see, fantastic quality. But with anything over 2x and denoise below 0.55 I get trash results.

Regarding the "pixelated" image: I don't believe the latent upscale works at all with denoise values lower than 0.5. I could never get it to work, which is probably a better way to describe it.
It does; I can go down to 0.45 or 0.40 as long as I keep the upscaler at 2x.

As for your next question: did you tweak the prompt at all? As pointed out, Comfy only resets cached nodes if anything changes before them, so if you change the prompt, then every single rendering that depends on that prompt will need to be redone. That is true for every parameter of every node; if you change one earlier in the chain, every node after it has to be recalculated.
I only tweaked the denoise and upscale settings, but those come after the first generation and the preview picture, so it shouldn't redo all of that too. How else am I supposed to tweak minor things in the upscale settings to refine my picture, if any little change immediately leads to a whole new generation?
 

hkennereth

Maybe something went completely tits up in your workflow and got corrupted and shit? I kept getting pixels when running yours, but then I replaced the whole thing with my old latent-upscale-only workflow and popped your prompts in, and everything worked fine (other than being weird AF):

View attachment 3319384
Yeah, it's because in your workflow you're using 0.5 denoise for the latent upscale. That's why it works. :)

I wouldn't know what could have. I posted my workflow; it's quite simple.

Here's the 3x upscaler with 0.55 denoise:

As you can see, fantastic quality. But with anything over 2x and denoise below 0.55 I get trash results.



It does; I can go down to 0.45 or 0.40 as long as I keep the upscaler at 2x.



I only tweaked the denoise and upscale settings, but those come after the first generation and the preview picture, so it shouldn't redo all of that too. How else am I supposed to tweak minor things in the upscale settings to refine my picture, if any little change immediately leads to a whole new generation?
I used the workflow directly from your image, and (switching to 2x upscale because my GPU is not that fast) this is what I got with the original 0.4 denoise:
1706889384950.png

And then, just changing to 0.5 denoise, same settings on everything otherwise:
1706889416978.png

I usually don't use latent upscale because it doesn't tend to preserve the likeness of people created with LoRAs, but if all you care about is the final image and you're not trying to make it look like anyone in particular, it works great... you just need to set the denoise to 0.5 or higher.
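Why 0.4 and 0.5 behave so differently is easier to see from how denoise maps onto the step schedule. Roughly (a simplification of what samplers do; the function name is made up):

```python
def steps_to_run(total_steps: int, denoise: float) -> tuple[int, int]:
    """Roughly how samplers interpret denoise: only the last
    round(total_steps * denoise) steps of the schedule are run,
    starting from the input latent with that much noise added back."""
    run = round(total_steps * denoise)
    return total_steps - run, run   # (first step index, steps actually run)

# At 20 steps, 0.4 denoise runs only the last 8 steps -- often too few
# to clean up the artifacts a raw latent upscale introduces.
```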
 

Fuchsschweif

I used the workflow directly from your image, and (switching to 2x upscale because my GPU is not that fast) this is what I got with the original 0.4 denoise:
Hmmmm, weird, now I get that look too, but the last time I tried it worked well. I still don't get why it happens, though. A denoise of 0.40 means the AI has to generate less from scratch and keeps more of the existing generation. 0.55 gives results that vary too much if you want to stick close to the original; it must be possible to go lower.
 

Sepheyer

Regarding the "pixelated" image: I don't believe the latent upscale works at all with denoise values lower than 0.5. I could never get it to work, which is probably a better way to describe it. And I prefer to use the upscale-with-model technique anyway.

As for your next question: did you tweak the prompt at all? As pointed out, Comfy only resets cached nodes if anything changes before them, so if you change the prompt, then every single rendering that depends on that prompt will need to be redone. That is true for every parameter of every node; if you change one earlier in the chain, every node after it has to be recalculated.
Aight, here is a "fix": use a different latent upscale called NNLatent. It works for denoise values of ~0.2 and up, in exchange for dropping some details.

ComfyUI_00011_.png
 

Fuchsschweif

If I want, let's say, a 2K picture in 16:9 ratio (for YouTube videos), what starting resolution do I need to set in the Empty Latent Image?
 

Fuchsschweif


It's what I use to answer these questions whenever I have them.
1280x720 is kinda high to start out... but I guess I can just start with half those values, if I'm correct that 2x upscaling means doubling the resolution.
 

hkennereth

1280x720 is kinda high to start out... but I guess I can just start with half those values, if I'm correct that 2x upscaling means doubling the resolution.
When rendering with SD1.5 based models, I always keep one of the dimensions 512px, and just increase the other to get the screen ratio I want.

For SDXL the math is actually slightly different, since the recommendation is to make images with more or less the same total number of pixels as a square 1024 x 1024 px image; so if you increase one edge to change the ratio, the recommendation is to decrease the other as well. To help with that, I have a custom node installed with a list of preset pixel ratios that you feed into the Empty Latent Image node. Like so:
1706895838124.png
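If you don't want the preset node, that bucket math can be approximated by hand: keep width x height near 1024², and snap each edge to a multiple of 64. A sketch (the function name is made up, and real preset lists differ slightly from this rounding):

```python
import math

def sdxl_dims(ratio_w: float, ratio_h: float,
              target_pixels: int = 1024 * 1024, snap: int = 64) -> tuple[int, int]:
    """Pick a width/height with the requested aspect ratio whose area is
    close to target_pixels, snapped to multiples of `snap`."""
    height = math.sqrt(target_pixels * ratio_h / ratio_w)
    width = height * ratio_w / ratio_h
    to_snap = lambda v: max(snap, round(v / snap) * snap)
    return to_snap(width), to_snap(height)

# 16:9 comes out as 1344 x 768 -- roughly the same pixel count as 1024 x 1024.
```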
 

Fuchsschweif

When rendering with SD1.5 based models, I always keep one of the dimensions 512px, and just increase the other to get the screen ratio I want.
But YYYYx512 upscaled by two wouldn't make 1080, would it? That would equal 1024. That's why I thought I have to use values that, doubled or tripled, produce the desired target resolution.

Or am I misunderstanding how upscaling works?
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
But YYYYx512 upscaled by two wouldn't make 1080, would it? That would equal 1024. That's why I thought I have to use values that, doubled or tripled, produce the desired target resolution.

Or am I misunderstanding how upscaling works?
I believe there might be an option for upscaling to a target resolution rather than using a multiplier. I have no idea how it works in ComfyUI though.. :geek: If not, then maybe you can use a factor of 2.1, etc.
 

hkennereth

But YYYYx512 upscaled by two wouldn't make 1080, would it? That would equal 1024. That's why I thought I have to use values that, doubled or tripled, produce the desired target resolution.

Or am I misunderstanding how upscaling works?
No, you're not wrong. But you can upscale either by a multiplier (like 2x) or to an exact size, if you want to go that route.
1706908213769.png
It's also not the end of the world if you use a slightly different image size to start with. I said I always start with 512px because... that's what works for my needs: I usually don't need to upscale directly to a discrete screen size, and sticking to 512px or close avoids issues with character duplication. I'll usually upscale past the point I need, and downscale/crop later if needed.

But you can just start with a 960 x 544 px latent image (the values need to be a multiple of 64, so there's no option to start at 540 for a direct 2x upscale to 1080), and crop the image from 1088 down to 1080px later. There's even a node to do that directly inside Comfy; I don't think 8px is worth opening Photoshop or similar just to choose which way to crop.
1706909004801.png
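The arithmetic for picking that start size generalizes. A sketch with the snap size as a parameter (the function name is made up; note the 960 x 544 example above corresponds to snapping to multiples of 32, since 544 = 17 × 32):

```python
import math

def start_res(target_w: int, target_h: int, factor: int = 2, snap: int = 32):
    """Smallest start resolution, in `snap`-pixel steps, whose `factor`x
    upscale covers the target; also returns the overshoot to crop away."""
    sw = math.ceil(target_w / (factor * snap)) * snap
    sh = math.ceil(target_h / (factor * snap)) * snap
    return (sw, sh), (sw * factor - target_w, sh * factor - target_h)

# 1920 x 1080 at 2x -> start at 960 x 544, then crop 8px of height.
```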
 

me3

Member
Dec 31, 2016
316
708
So I'm struggling slightly with providing much detail on this; I've included the prompt I started with, but I'm not sure how much help it'll be.
The reason, I guess, can be summed up as "inpainting"...
The general (and very repetitive) workflow is just (re)loading the image, drawing mask(s), and painting in more/new/replacement content.
Somewhere along the line I managed to get a white line at the top and bottom; no idea when or why that showed up.
Then when I did the final slight denoising and upscaling to "blend" things, it made it look like she has dried snot coming up and out of her left nostril... classy lady, probably has bigger things on her mind.

inpaint_0001.jpg

 

hkennereth

So I'm struggling slightly with providing much detail on this; I've included the prompt I started with, but I'm not sure how much help it'll be.
The reason, I guess, can be summed up as "inpainting"...
The general (and very repetitive) workflow is just (re)loading the image, drawing mask(s), and painting in more/new/replacement content.
Somewhere along the line I managed to get a white line at the top and bottom; no idea when or why that showed up.
Then when I did the final slight denoising and upscaling to "blend" things, it made it look like she has dried snot coming up and out of her left nostril... classy lady, probably has bigger things on her mind.

View attachment 3322590

I believe the technical description of the cause of those lines around the image is "it do be like that sometimes".

It just happens. Sometimes on first-generation images, sometimes due to upscaling... but for no particular reason; it's just an SD thing. If it bothers me too much I use something like Content-Aware Fill in Photoshop or Photopea to remove it from the final image. Most times I don't even bother.
 

Mr-Fox

So I'm struggling slightly with providing much detail on this; I've included the prompt I started with, but I'm not sure how much help it'll be.
The reason, I guess, can be summed up as "inpainting"...
The general (and very repetitive) workflow is just (re)loading the image, drawing mask(s), and painting in more/new/replacement content.
Somewhere along the line I managed to get a white line at the top and bottom; no idea when or why that showed up.
Then when I did the final slight denoising and upscaling to "blend" things, it made it look like she has dried snot coming up and out of her left nostril... classy lady, probably has bigger things on her mind.

View attachment 3322590

I would suggest the same as hkennereth: simply fix those things in Photoshop. It's much easier and faster than trying to chase down the issue and then trying to get the same image again. Knowing SD, it's not gonna happen..
For the white edges I would either use Content-Aware Fill or simply crop them off. For the "snot", I would try the clone stamp tool or possibly Content-Aware Fill.
 

Fuchsschweif

Hey guys, I've got this workflow for upscaling purposes, but it crashes my computer.

I'd need a latent output so I can put a tiled VAE decode between the sampler and the final saved image, in order to prevent these crashes. But there's only an image output from the SD Upscale node.

Any idea what I could put in there in order to run the result through a tiled VAE decode?

1707072421453.png
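For what it's worth, the idea behind a tiled VAE decode is just to decode the latent in chunks so the whole image's activations never sit in memory at once. A toy sketch with a stub in place of the real VAE (real tiled nodes also overlap and blend tiles to hide seams; all names here are made up):

```python
SCALE = 8  # SD's VAE turns each latent pixel into an 8x8 block of image pixels

def decode_stub(tile):
    # Stand-in for the real VAE decoder: nearest-neighbour 8x upsample.
    out = []
    for row in tile:
        up_row = [v for v in row for _ in range(SCALE)]
        out.extend([up_row[:] for _ in range(SCALE)])
    return out

def tiled_decode(latent, tile_size=64):
    """Decode `latent` (a 2D list) in tile_size x tile_size chunks and
    stitch the results, so only one tile is 'in the VAE' at a time."""
    h, w = len(latent), len(latent[0])
    out = [[0] * (w * SCALE) for _ in range(h * SCALE)]
    for y in range(0, h, tile_size):
        for x in range(0, w, tile_size):
            chunk = [row[x:x + tile_size] for row in latent[y:y + tile_size]]
            decoded = decode_stub(chunk)
            for dy, drow in enumerate(decoded):
                out[y * SCALE + dy][x * SCALE:x * SCALE + len(drow)] = drow
    return out
```

Because the stub is purely local, the tiled result matches a full decode exactly; with a real VAE the tiles interact at their borders, which is where the overlap/blend tricks come in.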