[Stable Diffusion] Prompt Sharing and Learning Thread

me3

Member
Dec 31, 2016
316
708
Would you know why it doesn't drop the blank tokens altogether, i.e. the spaces? I am surprised those are even added to the list. Also, if you still have the code up, can you please let us know how this gets converted: (((test:1.3))).
This is the output from just the weight assignment, so there might be functions that drop "empty" array elements later. My guess is that it keeps everything for "completeness", so that you could technically just merge the [x][0] array elements back together if needed. I honestly haven't looked at the whole processing code for it; it's just the kind of thing that wouldn't surprise me if a coder had done it.
Also, there's such a thing as token padding, where I believe both the positive and negative tokens get padded to equal length. I don't know why this gets done or what it affects, but it's not unlikely that the padding is just spaces, so depending on where in the parsing that happens, not stripping spaces could be to maintain something related to that. Again, just a random thought.
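
A minimal sketch of what padding looks like at the tokenizer level, assuming the standard transformers CLIPTokenizer that SD 1.x uses (an illustration, not the webui's actual code):
Python:
from transformers import CLIPTokenizer

# SD 1.x's text encoder has a fixed context length of 77 tokens,
# so short and empty prompts alike get padded up to that length
tok = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
enc = tok(["a photo of a cat", ""], padding="max_length",
          max_length=77, return_tensors="pt")
print(enc.input_ids.shape)  # torch.Size([2, 77]) - both prompts padded to 77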


As for your test, it comes to 1.573. Every additional () after the first adds a multiplier of 1.1 to the specified weight, so with 2 additional () you get 1.3 * 1.1 * 1.1. Due to floating-point math there's a whole bunch of 0s and a stray digit at the end, but that's not really important in this case.
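
Easy to check by hand; a sketch of just the arithmetic, not the actual parser:
Python:
base = 1.3   # the explicit weight in (((test:1.3)))
# the two parentheses beyond the innermost pair each add a 1.1x factor
print(base * 1.1 * 1.1)  # 1.5730000000000004 - float rounding adds the tail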
 
  • Like
Reactions: Sepheyer

Sepheyer

Well-Known Member
Dec 21, 2020
1,526
3,596
This is the output from just the weight assignment, so there might be functions that drop "empty" array elements later. My guess is that it keeps everything for "completeness", so that you could technically just merge the [x][0] array elements back together if needed. I honestly haven't looked at the whole processing code for it; it's just the kind of thing that wouldn't surprise me if a coder had done it.
Also, there's such a thing as token padding, where I believe both the positive and negative tokens get padded to equal length. I don't know why this gets done or what it affects, but it's not unlikely that the padding is just spaces, so depending on where in the parsing that happens, not stripping spaces could be to maintain something related to that. Again, just a random thought.


As for your test, it comes to 1.573. Every additional () after the first adds a multiplier of 1.1 to the specified weight, so with 2 additional () you get 1.3 * 1.1 * 1.1. Due to floating-point math there's a whole bunch of 0s and a stray digit at the end, but that's not really important in this case.
Thanks, I was hoping for the actual code's intermediate results dump, if you still had that snippet up.
 

me3

Member
Dec 31, 2016
316
708
Thanks, I was hoping for the actual code's intermediate results dump, if you still had that snippet up.
Rather limited output, since it's just one word:
Python:
[['test', 1.5730000000000004]]
I can share it if you're interested; it's fairly uncomplicated. Just put the prompt in the print at the bottom and hit run; the link should be valid for a month.
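
For anyone curious, here is a toy parser that reproduces that dump. It's a loose sketch of the (((...))) weighting rule with a made-up function name, not the webui's actual parsing code:
Python:
import re

def parse_weights(prompt):
    # Toy illustration: each '(' wraps the enclosed text in a 1.1x
    # factor, and an explicit ':w' replaces the innermost factor
    # with w. Factors apply innermost-first, matching the webui's
    # multiplication order.
    out = []
    factors = []  # one multiplier per currently-open parenthesis
    buf = ""

    def flush():
        nonlocal buf
        if buf:
            weight = 1.0
            for f in reversed(factors):  # innermost factor first
                weight *= f
            out.append([buf, weight])
            buf = ""

    i = 0
    while i < len(prompt):
        ch = prompt[i]
        if ch == "(":
            flush()
            factors.append(1.1)
        elif ch == ":" and factors:
            m = re.match(r"[\d.]+", prompt[i + 1:])
            if m:  # explicit weight overrides the innermost 1.1
                factors[-1] = float(m.group())
                i += len(m.group())
        elif ch == ")":
            flush()
            factors.pop()
        else:
            buf += ch
        i += 1
    flush()
    return out

print(parse_weights("(((test:1.3)))"))  # [['test', 1.5730000000000004]]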
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
Synalon pointed out to me that the editor I showed in this post yesterday is something you may need to install separately. I thought it was included with ControlNet. I apologize for the oversight. I have updated the post.

 
Last edited:

Sepheyer

Well-Known Member
Dec 21, 2020
1,526
3,596
Picking up on an idea left for later in this post: link, namely taking a slender, woman-only picture and outpainting outward from it.

The goal is to save on machine cycles. You render and save "just the girl", and while the outpaint is rendering you have enough time to decide whether you want to go ahead with it: you can cancel the outpaint at any moment.

Here's the workflow file for CUI:
outpainting.png

Why? Because the girl herself takes only about a third of the time required to render the entire image. It kinda sucks rendering the whole thing just to find out something turned out bad. E.g. here I realized I didn't want the purple square and cancelled the render.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,526
3,596
In a day or so I'll be dropping a workflow where the outpaint option is present but disabled; you just don't enable it at the start. You can render as many smaller images as you want, then go back to the one you like, enable the outpaint, and commit machine cycles there.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,526
3,596
So this guy ( ) keeps experimenting with ComfyUI workflows, and he posted a method for making a consistent face, outfit and pose:



Here is the link to his workflow that's meant to do all of that.
I popped this workflow in, and it turns out I am missing quite a few nodes:
Untitled.png
On his page NR suggests a CUI plugin to automatically download the missing nodes; finally a motivation to try out some of those bells and whistles.

It will take me some time to get my CUI installation to where it can run that workflow, partially due to procrastination. Still, this looks very interesting. If it works as advertised, it might become a great addition or even an alternative to LoRAs.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,526
3,596
How does that old joke about being rational go? NASA spent a few million dollars on their space pen; the Russians just used an old pencil.

Depending on your machine, this workflow will very quickly generate a girl against a white background and then spend close to 10 minutes outpainting that white background 250 pixels left and 250 pixels right. And then, of course, there are nuances:
a_00945_.png
The image above is from the CUI workflow.

The image below is a manual edit: one layer was the AI-generated girl, another layer was a ~white background, and the two layers were blended:
_template.png
Now, don't let me do you a disservice through ridiculing CUI's outpaint workflow. The result was subpar because I am a moron; the workflow approach itself is very clearly the future. I am clowning somewhat; don't make the mistake of drawing the wrong conclusion here.
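
For reference, the manual composite boils down to something like this (a Pillow sketch with assumed filenames and sizes, not the actual edit):
Python:
from PIL import Image

# hypothetical filename for the rendered girl-only image
girl = Image.open("girl_render.png")
w, h = girl.size
# near-white canvas, 250 px wider on each side to mirror the outpaint
canvas = Image.new("RGB", (w + 500, h), (250, 250, 250))
canvas.paste(girl, (250, 0))
canvas.save("composite.png")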
 

Fuchsschweif

Active Member
Sep 24, 2019
962
1,547
What are the best upscalers for very sharp and good results? I got some of the NMKD ones but they all suck so far. (Or it's my skill issue)
 
Last edited:

hkennereth

Member
Mar 3, 2019
228
740
What are the best upscalers for very sharp and good results? I got some of the NMKD ones but they all suck so far. (Or it's my skill issue)
I personally use two primarily: a good and reliable all-arounder (4x-Ultrasharp), and one that's better for high-precision, detailed upscaling but can be tricky in some cases (4x_NMKD-Siax_200k). I use both extensively and I stand by their quality. I also had issues with other NMKD upscalers like the Superscale-SP one, so it's not just you. In general I use Siax for everything unless it shows some artifacting, in which case I go back to Ultrasharp.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,526
3,596
I personally use two primarily: a good and reliable all-arounder (4x-Ultrasharp), and one that's better for high-precision, detailed upscaling but can be tricky in some cases (4x_NMKD-Siax_200k). I use both extensively and I stand by their quality. I also had issues with other NMKD upscalers like the Superscale-SP one, so it's not just you. In general I use Siax for everything unless it shows some artifacting, in which case I go back to Ultrasharp.
Would you post examples of what those look like on ~1200x2400 renders?
 

Jimwalrus

Active Member
Sep 15, 2021
885
3,269
What are the best upscalers for very sharp and good results? I got some of the NMKD ones but they all suck so far. (Or it's my skill issue)
I find ESRGAN_4x to be very forgiving.
Every time I've done side-by-sides with others, it's as good or better. It may also be my lack of skill in getting good results with those other upscalers, but that kind of proves my point that it's very forgiving.
 
  • Like
Reactions: sharlotte

me3

Member
Dec 31, 2016
316
708
Something to remember with upscalers: they too are made with certain things in mind, so you need to pick an upscaler suited to the "type" of image you're applying it to.
 
  • Like
Reactions: Mr-Fox

Fuchsschweif

Active Member
Sep 24, 2019
962
1,547
This is what I generally get with ESRGAN_4x; it's still a bit "soft" and washed out. I'd like the images to be more crisp and sharp.

1697472773540.png

4x Ultrasharp:

1697473039352.png

Both at 20 sample steps + 40 hires steps.

Doesn't get sharper with 30/60:

1697473738355.png
 
Last edited:

Fuchsschweif

Active Member
Sep 24, 2019
962
1,547
With higher resolutions it definitely gets sharper. But since resolution isn't the only factor, is there a way to get sharper outputs at 1024x1024 already?

Here it's upscaled 2.5x instead of 2x (from 512x512); the detailed hair especially stands out:

00012-1352350894.png

And here it's upscaled 3x (from 512x512):

00013-966429224.png
 
  • Red Heart
Reactions: sharlotte

me3

Member
Dec 31, 2016
316
708
You might be able to find something. At the very least you can get an idea of what exists and what works better with which types of things.
 

hkennereth

Member
Mar 3, 2019
228
740
Would you post examples of what those look like on ~1200x2400 renders?
Sure. I don't really have a side-by-side example at hand, and they take a little while to render on my mid-range machine, so I'll just show some old renders I have.

This first one was upscaled with Siax:
fullres_00005_.png

And this one with UltraSharp:
fullres_00011_.png

Without a side-by-side, I understand it would be difficult to see the difference, but they both give me great results.
 

Fuchsschweif

Active Member
Sep 24, 2019
962
1,547
With higher resolutions it definitely gets sharper. But since resolution isn't the only factor, is there a way to get sharper outputs at 1024x1024 already?

Here it's upscaled 2.5x instead of 2x (from 512x512); the detailed hair especially stands out:

View attachment 3010939

And here it's upscaled 3x (from 512x512):

View attachment 3010940
Meanwhile, this is also with 3x upscaling and 40 steps, but the face is highly lacking in detail:

1697476165817.png
 
Last edited:

hkennereth

Member
Mar 3, 2019
228
740
With higher resolutions it definitely gets sharper. But since resolution isn't the only factor, is there a way to get sharper outputs at 1024x1024 already?
Well, it depends on what you define as "sharp". The problem is that the level of detail decreases quickly the smaller a "thing" is within the image, because of the way Stable Diffusion works. It's really made to create well-defined shapes around the size of the original render scale (512x512 px), and anything smaller than around 64x64 px is basically just "random brushstrokes to fake detail", so the smaller the subject becomes within the frame, the more SD starts "guessing" what its shape should be.

Upscaling and re-rendering through img2img is a hack to get SD to render subjects above that low limit. What you can do, however, is use tricks like inpainting at full resolution for things like faces, which in ComfyUI one can more easily accomplish with the FaceDetailer plugin, to re-render just that part at a higher resolution, increasing detail.

For example, the image I posted above is the result of two upscale steps. I initially render a first version at ~512 px² (just making one of the axes bigger depending on whether I want a landscape or portrait picture; in this case it was 512x640 px, I believe), then upscale that 2x so it becomes ~1024 px², then send it to FaceDetailer to make the face more faithful to my subject, and finally run another 2x upscale to ~2048 px². That final result is what I posted above.
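
As a rough illustration of the upscale-then-re-render step (a diffusers sketch with an assumed model, filenames and settings, not hkennereth's actual ComfyUI graph):
Python:
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# illustrative checkpoint; any SD 1.x model works the same way
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

low = Image.open("render_512x640.png")         # hypothetical first render
big = low.resize((1024, 1280), Image.LANCZOS)  # cheap 2x upscale first
# low strength keeps the composition; re-rendering only adds detail
out = pipe(prompt="portrait photo, detailed face",
           image=big, strength=0.4).images[0]
out.save("rerendered_1024x1280.png")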

Below you can see the result of the first upscale and the one post-FaceDetailer. Sorry, I didn't save the original render, and I no longer have the checkpoint that was used. I'd say they have a decent amount of detail, but not enough for my standards, which is why I upscale once more even when posting somewhere like Instagram, where resolution-wise the one below would be enough.

Upscaled to ~1024 px:
midres_00002_.png

FaceDetailer:
midres_dd_00002_.png
 
  • Like
Reactions: Fuchsschweif

hkennereth

Member
Mar 3, 2019
228
740
Actually, I found that I do have the very first render of this image saved... but that's because I was using a crazy complicated flow where I would render a very low-res image, without any LoRAs, with another checkpoint that produced more interesting poses, then use that as the source for ControlNet so I could make an image with my LoRA that didn't just have boring poses. So the first render for the image above is this one, which just gave me the pose I wanted:

ComfyUI_00082_.png
 
  • Like
Reactions: DD3DD