[Stable Diffusion] Prompt Sharing and Learning Thread

Jimwalrus

Well-Known Member
Sep 15, 2021
1,045
3,994
What are the best upscalers for very sharp and good results? I got some of the NMKD ones but they all suck so far. (Or it's my skill issue)
I find ESRGAN_4x to be very forgiving.
Every time I've done side-by-side comparisons with other upscalers it's been as good or better. That may also be down to my lack of skill at getting good results with those other upscalers, but that kind of proves my point that it's very forgiving.
 
  • Like
Reactions: sharlotte

me3

Member
Dec 31, 2016
316
708
Something to remember with upscalers: they too are made with certain things in mind, so you need to pick an upscaler suited to the "type" of image you're applying it to.
 
  • Like
Reactions: Mr-Fox

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,145
1,955
This is what I generally get with ESRGAN_4x; it's still a bit "soft" and washed out. I'd like the images to be crisper and sharper.

1697472773540.png

4x Ultrasharp:

1697473039352.png

Both at 20 sample steps + 40 hires steps.

Doesn't get sharper with 30/60:

1697473738355.png
 
Last edited:

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,145
1,955
With higher resolutions it definitely gets sharper. But since resolution isn't the only factor, is there a way to get sharper outputs at 1024x1024 already?

Here it's upscaled 2.5x instead of 2x (from 512x512); the detailed hair especially stands out:

00012-1352350894.png

And here it's upscaled 3x (from 512x512):

00013-966429224.png
 
  • Red Heart
Reactions: sharlotte

me3

Member
Dec 31, 2016
316
708
You might be able to find something. At the very least you can get an idea of what exists and what works better with which types of images.
 

hkennereth

Member
Mar 3, 2019
237
775
Would you post examples of what those look like on ~1200x2400 renders?
Sure. I don't really have a side-by-side example at hand, and they take a little while to render on my mid-range machine, so I'll just show some old renders I have.

This first one was upscaled with Siax:
fullres_00005_.png

And this one with UltraSharp:
fullres_00011_.png

Without a side-by-side, I understand it would be difficult to see the difference, but they both give me great results.
 

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,145
1,955
With higher resolutions it definitely gets sharper. But since resolution isn't the only factor, is there a way to get sharper outputs at 1024x1024 already?

Here it's upscaled 2.5x instead of 2x (from 512x512); the detailed hair especially stands out:

View attachment 3010939

And here it's upscaled 3x (from 512x512):

View attachment 3010940
Meanwhile, this one is also upscaled 3x with 40 steps, but the face really lacks detail...

1697476165817.png
 
Last edited:

hkennereth

Member
Mar 3, 2019
237
775
With higher resolutions it definitely gets sharper. But since resolution isn't the only factor, is there a way to get sharper outputs at 1024x1024 already?
Well, it depends on what you define as "sharp". The problem is that, because of the way Stable Diffusion works, the level of detail drops off fast the smaller a "thing" is within the image. It's really made to create well-defined shapes around the size of the original render scale (512x512 px), and anything smaller than around 64x64 px is basically just "random brushstrokes to fake detail", so the smaller the subject is within the frame, the more SD starts "guessing" what its shape should be.

Upscaling and re-rendering through img2img is a hack to get SD to give subjects more than that low pixel budget. What you can also do, however, is use tricks like inpainting at full resolution for things like faces, which in ComfyUI is easiest with the FaceDetailer plugin, to re-render just that part at a higher resolution and increase the detail.

For example, the image I posted above is the result of two upscale steps. I initially render a first version at ~512 px² (making one of the axes bigger depending on whether I want a landscape or portrait picture; in this case it was 512x640 px, I believe), then upscale that 2x so it becomes about ~1024 px², send it to FaceDetailer to make the face more faithful to my subject, and finally run another 2x upscale to about ~2048 px². This final result is what I posted above.
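
If it helps to see the idea outside of a node graph, below is a rough Python sketch of that flow using diffusers. This is only a sketch, not my actual ComfyUI setup: the checkpoint name is a stand-in, and the face step is left as a placeholder comment rather than the real FaceDetailer node.

Python:
# Rough sketch only: render small -> img2img 2x -> fix the face -> img2img 2x again.
# The checkpoint and the face-mask step are placeholders, not the real setup.
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

model = "runwayml/stable-diffusion-v1-5"          # swap in your own checkpoint
prompt = "portrait of a woman, detailed skin, detailed eyes"

txt2img = StableDiffusionPipeline.from_pretrained(model, torch_dtype=torch.float16).to("cuda")
img2img = StableDiffusionImg2ImgPipeline(**txt2img.components).to("cuda")  # reuse the same weights

# 1) base render at ~512x640
base = txt2img(prompt, width=512, height=640, num_inference_steps=25).images[0]

# 2) first 2x: resize, then re-render at low denoise so SD adds detail instead of repainting
mid = img2img(prompt, image=base.resize((1024, 1280), Image.LANCZOS),
              strength=0.4, num_inference_steps=30).images[0]

# 3) face pass: in ComfyUI this is FaceDetailer; here it would be an inpaint over a face mask
#    (detection and mask generation omitted, this line is only a placeholder)
# mid = inpaint(prompt, image=mid, mask_image=face_mask, strength=0.5).images[0]

# 4) second 2x, same trick as step 2
final = img2img(prompt, image=mid.resize((2048, 2560), Image.LANCZOS),
                strength=0.3, num_inference_steps=30).images[0]
final.save("final.png")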

Below you can see the result of the first upscale and the one post-FaceDetailer. Sorry, I didn't save the original render, and I no longer have the checkpoint I used. I'd say they have a decent amount of detail, but not enough for my standards, which is why I upscale them once more even when I post them somewhere like Instagram, where resolution-wise the one below would be enough.

Upscaled to ~1024 px:
midres_00002_.png

FaceDetailer:
midres_dd_00002_.png
 
  • Like
Reactions: Fuchsschweif

hkennereth

Member
Mar 3, 2019
237
775
Actually, I found that I do have the very first render of this image saved... but that's because I was using a crazy complicated flow: I would render a very low-res image with another checkpoint and no LoRAs (which tends to give more interesting poses), then use that as the ControlNet source so the image made with my LoRA didn't end up with a boring pose. So the first render for the image above is this one, which just gave me the pose I wanted:

ComfyUI_00082_.png
 
  • Like
Reactions: DD3DD

sharlotte

Member
Jan 10, 2019
300
1,592
Not sure if this has ever been mentioned here, but there is a very good 'site' to help out with DOF settings, which can be used with SD and, I presume, ComfyUI, though I've never tried it there:
 
  • Like
Reactions: me3 and Sepheyer

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
I prefer NMKD over all the others I've tried so far. Much more goes into the end result than just the upscaler.
I'd recommend starting from a higher resolution: I often use 640x960 with SD1.5 and a 2x upscale with hires fix, choosing either NMKD Superscale or NMKD Face. The denoising setting is very important when upscaling; set it too high and everything gets too smoothed out. The right number of sample steps and hires steps is also fundamental. Then, if you want to take it a step further, do another upscale in img2img, but be very careful with the denoising there and use more steps.
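
For anyone driving the webui through its API (the --api launch flag), those settings map roughly onto a payload like the sketch below. The field names are from memory of the A1111 API, so check them against the /docs page of your own install, and the upscaler string has to match a model you actually have.

Python:
# Hedged sketch: hires-fix settings sent to the A1111 webui API. Values are starting
# points, not gospel; the upscaler name must match one listed by your install.
import base64
import requests

payload = {
    "prompt": "portrait photo, detailed skin, detailed eyes",
    "width": 640,
    "height": 960,                  # start higher than 512x512
    "steps": 20,                    # sample steps
    "cfg_scale": 7,
    "enable_hr": True,              # hires fix
    "hr_scale": 2,                  # 2x -> 1280x1920
    "hr_upscaler": "4x_NMKD-Superscale-SP_178000_G",   # or NMKD Face, etc.
    "hr_second_pass_steps": 40,     # hires steps
    "denoising_strength": 0.35,     # too high and everything gets smoothed out
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=600)
r.raise_for_status()

# The API returns base64-encoded PNGs
with open("hires_test.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))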
 
Last edited:
  • Like
Reactions: Jimwalrus

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,145
1,955
Mr-Fox hkennereth
I find yours a bit blurry/washed out too, especially in the faces.

What I mean by "sharp" is stuff like this:

1697495488717.png

1697495334779.png

This one is a bit low res but details are still sharp asf ("carved out"):

1697495026225.png
 
Last edited:
  • Red Heart
Reactions: sharlotte

hkennereth

Member
Mar 3, 2019
237
775
Mr-Fox hkennereth
I find yours a bit blurry/washed out too, especially in the faces.
That's a style thing and entirely dependent on your prompt, not really a render quality issue. I intentionally ask for bloom and glow in my pictures to give them a softer look, since it's more elegant and less like they were taken with a phone camera. But the process I use can produce that look just the same.

The images attached here use the same process and are a bit sharper, although perhaps not like your examples, since that's simply not what I aim to make:
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Mr-Fox hkennereth
I find yours a bit blurry/washed out too, especially in the faces.

What I mean by "sharp" is stuff like this:

View attachment 3011733

View attachment 3011728

This one is a bit low res but details are still sharp asf ("carved out"):

View attachment 3011718
Compare the images I posted with your own of the same concept... my images are a big improvement over them, if I may say so. I'm also only using the scenario and concept you posted earlier. If you keep moving the goalposts, none of us can help you. If this is the look you're going for, then find out which checkpoint was used to create those images. This is a completely different style and concept altogether. You achieve sharpness by using as high a resolution as possible from the beginning, depending on your hardware and SD version. Then do a very good upscale, with further refinement. I suggest you try out SDXL checkpoints, Fenrisxl for example. That's the checkpoint I used for my recent Mista Fox, which you can find via the link below.

https://f95zone.to/threads/ai-art-show-us-your-ai-skill.138575/post-11967954
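
As a rough illustration of the "start as big as possible" advice, this is what a native 1024x1024 SDXL render looks like in code. A minimal diffusers sketch, assuming the base SDXL model as a stand-in; a downloaded checkpoint like Fenrisxl would be loaded from its file with from_single_file() instead.

Python:
# Minimal SDXL sketch: at 1024x1024 native resolution there's no upscaling hack needed yet.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16  # stand-in checkpoint
).to("cuda")

image = pipe(
    "portrait photo, sharp focus, highly detailed skin texture, detailed eyes",
    negative_prompt="blurry, soft focus, haze",
    width=1024, height=1024,        # SDXL is trained around 1024x1024
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("sdxl_test.png")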

 
  • Like
Reactions: Sepheyer

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,145
1,955
This is a completely different style and concept altogether.
It is, but I wasn't talking about the style, just the clarity and sharpness of the details. Two artworks don't have to share the same style to be comparable on specific points. I wasn't trying to offend any of you; I just think those pictures are on another level in terms of those two factors... and as you said yourself earlier, we're all here to learn!


You achieve sharpness by using as high a resolution as possible from the beginning, depending on your hardware and SD version. Then do a very good upscale, with further refinement. I suggest you try out SDXL checkpoints, Fenrisxl for example. That's the checkpoint I used for my recent Mista Fox, which you can find via the link below.
I wanted to try higher, but SD has started shutting my computer down again and again... as soon as the upscaler kicks in at 3x (even though that worked fine earlier with the pics I posted here).

BlueScreenViewer points at the RAM, which I find odd, since SD doesn't really use much RAM, does it?

1697497777141.png

caused by "ntoskrnl.exe+41270"

:/
 

hkennereth

Member
Mar 3, 2019
237
775
It is, but I wasn't talking about the style, just the clarity and sharpness of the details. Two artworks don't have to share the same style to be comparable on specific points. I wasn't trying to offend any of you; I just think those pictures are on another level in terms of those two factors... and as you said yourself earlier, we're all here to learn!
Alas, you actually are talking about style, because what you're describing doesn't have anything to do with upscaling, which is what led us to this discussion in the first place. The look in your example pictures is the result of prompts that focus on sharpness, checkpoints that favor crisp images, and high CFG generation settings. Check out the images I shared and you can see that even at lower resolutions they still have very fine hair and skin texture, detailed eyes, etc., which is what you're really after when you talk about "sharpness and detail".

So the process can do that; now you need to prompt for it, change your settings to allow it, and use a checkpoint that enables it.
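
If you want to see how much of that comes from guidance alone, a quick test is to render the same seed and prompt at a few CFG values and compare. A minimal sketch, assuming diffusers with the SD1.5 base model as a stand-in checkpoint; the prompt keywords are just examples to try.

Python:
# Tiny experiment: same seed and prompt, only CFG changes, so any difference in
# "crispness" comes from guidance rather than the upscaler or the resolution.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16   # stand-in checkpoint
).to("cuda")

prompt = ("close-up portrait, sharp focus, highly detailed skin texture, "
          "detailed eyes, intricate hair")
negative = "blurry, soft focus, bloom, haze, depth of field"

for cfg in (5, 8, 11):
    g = torch.Generator("cuda").manual_seed(1234)    # fixed seed for a fair comparison
    image = pipe(prompt, negative_prompt=negative, guidance_scale=cfg,
                 num_inference_steps=30, generator=g).images[0]
    image.save(f"cfg_{cfg}.png")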
 

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,145
1,955
Alas, you actually are talking about style, because what you're describing doesn't have anything to do with upscaling, which is what led us to this discussion in the first place. The look in your example pictures is the result of prompts that focus on sharpness, checkpoints that favor crisp images, and high CFG generation settings.
No, what led us to this discussion was this question of mine, which you yourself quoted earlier:

With higher resolutions it definitely gets sharper. But since resolution isn't the only factor, is there a way to get sharper outputs at 1024x1024 already?
Clearly I was asking what else I could do to get sharper results besides upscaling. Your post has now partially answered that question, so thank you anyway :p
 
Last edited:

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,145
1,955
I wanted to try higher, but SD has started shutting my computer down again and again... as soon as the upscaler kicks in at 3x (even though that worked fine earlier with the pics I posted here).

BlueScreenViewer points at the RAM, which I find odd, since SD doesn't really use much RAM, does it?
By the way @ all regarding this issue here,
I did some quick research: someone had a similar problem on Reddit (without mentioning RAM, though), and someone else replied "maybe the picture is corrupted".

So instead of taking that one generation I like through PNG Info -> Send to txt2img, I made a fresh generation with the same settings but a different prompt and no fixed starting seed, and it worked fine.

Apparently "corrupted pictures" (seeds) is a thing..
 

hkennereth

Member
Mar 3, 2019
237
775
I made a fresh generation with the same settings but a different prompt and no fixed starting seed, and it worked fine.

Apparently "corrupted pictures" (seeds) is a thing..
In all my years (okay, year singular) of using Stable Diffusion I have never heard of such a thing. And there is no such thing as generating without a starting seed, either. You may not have used the same seed as whatever source prompt you got, but there is always a seed; it's what generates the initial noise field from which the image is created.
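
To illustrate the point, here's a minimal diffusers-flavoured sketch (not the webui internals): the webui's "-1" just means "pick a random number for me", but a concrete seed is always used, and reusing it reproduces the same starting noise and therefore the same image. The checkpoint name is a stand-in.

Python:
# Minimal sketch: there is no seedless generation, just seeds you didn't choose yourself.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16   # stand-in checkpoint
).to("cuda")

seed = torch.seed()                      # "random" seed, but still a concrete number
print("seed used:", seed)

g = torch.Generator("cuda").manual_seed(seed)
a = pipe("a red fox in the snow", generator=g, num_inference_steps=20).images[0]

g = torch.Generator("cuda").manual_seed(seed)    # same seed -> same initial noise field
b = pipe("a red fox in the snow", generator=g, num_inference_steps=20).images[0]
# a and b come out identical; change the seed and the noise (and the image) changes.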