[Stable Diffusion] Prompt Sharing and Learning Thread

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,143
1,954
I'm away from my PC atm, so can't test anything, but please note "0.02" is NOT the same as 0.2!
It seems to be an error you make a lot. I don't think it's causing this, but it does affect things.
I know. I set it to such a low number so that I could expect 99% the same result while giving SD a tiny bit of room to change minor details if necessary. I haven't played around enough with the values yet to see how much "room" it needs, but as you said, with 0.02 we could've expected to get almost the exact same picture.
 
  • Like
Reactions: Jimwalrus

Jimwalrus

Well-Known Member
Sep 15, 2021
1,045
3,994
Without putting it forward as a fix for this issue, I'd recommend turning CLIP Skip off. It's really best to only use it with LoRAs where its use is suggested, or as a last resort if you just can't get the damn thing to create what you want.
 
  • Like
Reactions: Fuchsschweif

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,143
1,954
Without putting it forward as a fix for this issue, I'd recommend turning CLIP Skip off. It's really best to only use it with LoRAs where its use is suggested, or as a last resort if you just can't get the damn thing to create what you want.
Thanks, yeah I changed that in the beginning when I installed the first Lora and it was recommended. Set it back to 1 now but yeah, unfortunately it's not the fix.
 
  • Like
Reactions: Jimwalrus

Jimwalrus

Well-Known Member
Sep 15, 2021
1,045
3,994
Thanks, yeah I changed that in the beginning when I installed the first Lora and it was recommended. Set it back to 1 now but yeah, unfortunately it's not the fix.
AFAIK 'Off' is Zero not 1 for Clip Skip.
I don't use it much, so can't say for sure.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,570
3,767
I know. I set it to such a low number so that I could expect 99% the same result while giving SD a tiny bit of room to change minor details if necessary. I haven't played around enough with the values yet to see how much "room" it needs, but as you said, with 0.02 we could've expected to get almost the exact same picture.
I think for upscaling the denoise should be closer to 50% rather than 2%. Try it in your setup, you might be surprised that the "change minor details" in practice starts being meaningfully affected after 25%-30% rather than 2%. I know it was the case in my setups and I was surprised realizing that the thing is rather progressive / exponential in nature.
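To see why 2% barely does anything, here's a rough Python sketch of the usual assumption that img2img actually runs about steps × denoise real sampling steps. The exact behavior depends on the UI, so treat the helper as illustrative, not as SD's actual code:

```python
# Illustrative only: many img2img implementations run roughly
# steps * denoise actual sampling steps, which is why very low
# denoise values change almost nothing.
def effective_steps(sample_steps: int, denoise: float) -> int:
    return max(1, int(sample_steps * denoise))

for d in (0.02, 0.1, 0.3, 0.5):
    print(d, effective_steps(20, d))
```

Under that assumption, 0.02 denoise on 20 steps leaves a single step to touch the image, which lines up with the "nothing happens until 25%-30%" experience.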
 

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,143
1,954
AFAIK 'Off' is Zero not 1 for Clip Skip.
I don't use it much, so can't say for sure.
I think 1 is off; you can't turn it off completely. At 1 the tooltip says "ignores no layers".
I can check that back later when I use SD again.

I think for upscaling the denoise should be closer to 50% rather than 2%. Try it in your setup, you might be surprised that the "change minor details" in practice starts being meaningfully affected after 25%-30% rather than 2%.
I'll play around with that once I can get SD to re-create my seeds, right now I get completely different results..
 
  • Like
Reactions: Jimwalrus

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Workflow Example

Prompt:

a beautiful woman standing, beautiful eyes, detailed

neg: fused hands, blurry

checkpoint: revAnimated
-----------------------------------------------------------
(no LoRAs or embeddings etc., nada.)

After a few generated images:

00047-3287071526.png

Now press the green recycle button. Activate hiresfix and select ultrasharp, set 0.3 denoising.
The first image is blurry but looks the same otherwise, so I add "blurry" to the negative and generate again with hiresfix.
The next image changes completely into a different one. This is because I didn't set the hires steps. In order to keep the composition, use double the number of steps you use for sampling. Meaning 20 sample steps times 2 = 40.
Now I'm generating again with 40 hires steps.
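The step rule of thumb above as a trivial helper. The 2x factor is just the heuristic from this post, not a hard rule:

```python
# Heuristic from the post: hires steps = 2 x sampling steps
# to keep the composition from drifting during hiresfix.
def hires_steps(sample_steps: int, factor: int = 2) -> int:
    return sample_steps * factor

print(hires_steps(20))  # 40
```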

00048-3287071526.png

Next Project.

a beautiful Asian woman standing, beautiful eyes, detailed.
wearing a cropped top and short shorts, hand on hip.

neg:fused hands, blurry

First image, the eyes are a bit wonky and the fingers are bad.
I re-use the same seed now by pressing the green recycle button, before I used random (-1).
I add symmetry to pos and asymmetrical to neg.
It can be good to use a static seed when adjusting and adding things to the prompt to see the effect. Though sometimes no changes happen until you use a new seed.
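A toy illustration (plain Python, not SD itself) of why a static seed is useful for A/B testing prompt changes: the same seed always reproduces the same starting noise, so your edits are the only source of change.

```python
import random

def starting_noise(seed: int, n: int = 4) -> list:
    # Stand-in for the latent noise SD derives from the seed.
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

a = starting_noise(1149865719)
b = starting_noise(1149865719)
c = starting_noise(42)
print(a == b, a == c)  # True False
```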
Ok now I have a decent image.
00050-1149865719.png

I activate the hiresfix again and use the same settings, meaning 40 hires steps and 0.3 denoising with the ultrasharp upscaler.
00051-1149865719.png
It's not 100 percent perfect but it's decent enough for this "little" tutorial...

Next Project

We add a background and get more specific with the prompt; we also start to organize it better.

a beautiful Asian woman standing, beautiful eyes, smile.
wearing a lace thong, topless, hand on hip.
busty, big nipple, curvy,
bedroom with curtains.
detailed, symmetry.

neg:asymmetrical, fused hands, blurry

The first image is pretty good but has things that need to be improved.
I add more tags to the negative and generate new images after every adjustment. You can add several tags in one go if they address the same problem, such as hands or fingers etc. Sometimes I use a static seed when I adjust the prompt to see the effects, but if nothing changes I try a new seed.
I got a pretty nice image, but the hands are not good even after the added tags and several images, so I try the extension named After Detailer (ADetailer) with the hand model and a negative prompt for the hands. Now the image is better. Time to activate hiresfix.
It's best to be methodical and add or adjust one thing at a time and introduce only extensions or additions such as Loras etc for a specific intended result and as needed. Starting out with adding a lot of things without knowing what will happen is the same as throwing everything at the wall and hoping it will stick, it's guaranteed to fail.

After I generated the image with hiresfix the hands were not good so I go back to normal generation in other words hunting for a better seed. I get a better image with a better success potential because the fingers are more hidden.
Again I use hiresfix.
00068-1982251251.png
Pretty nice image but not perfect. The hands can be fixed with inpaint but I want to finish this tutorial today...
So I only give it a quick go. Press send to img2img inpaint tab (color palette button).
Remove everything in the positive prompt and add "detailed hand, detailed fingers" only. Mask one hand. Set the denoise to 0.2 and "masked only".
Very important: don't use a static seed, set it to -1. Press generate. It wasn't enough, so increase the denoising to 0.3.
Still wasn't enough. Increase the denoising to 0.4 and generate again. Slightly better, but it can be improved more, so I increase the steps as a test. There we go. Decent enough.
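The escalation loop above, sketched as code. `run_inpaint` and `looks_good` are hypothetical stand-ins for the manual generate-and-eyeball steps:

```python
# Sketch of the manual process: retry inpainting at increasing
# denoise strengths until the masked area looks acceptable.
def fix_hand(run_inpaint, looks_good, strengths=(0.2, 0.3, 0.4)):
    result = None
    for s in strengths:
        result = run_inpaint(denoise=s, seed=-1)  # random seed each attempt
        if looks_good(result):
            return s, result
    return strengths[-1], result  # settle for the strongest attempt
```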

00012-2796811266.png

Conclusion

prompt:

a beautiful Asian woman standing, beautiful eyes, smile.
wearing a lace thong, topless, hand on hip.
busty, big nipple, curvy.
bedroom with curtains.
detailed, symmetry.

neg:
ugly, asymmetrical, bad anatomy, poorly drawn.
deformed iris, deformed pupil, cross eyed, lazy eyed.
fused hands, deformed hand, fused fingers, deformed fingers, (extra fingers).
blurry image.
cape.

Used ADetailer with hand model with inpainting denoise 0.3 and neg prompt:
bad hands, poorly drawn hands, deformed hand, deformed fingers, extra fingers, fused fingers.

hiresfix, upscaler 4x_Ultrasharp, 40 hires steps, denoising strength 0.3, multiplier: 2
---------------------------------------------------------------------------------------------------
Fixed hands with inpainting

inpaint prompt "detailed hand, detailed fingers"

only masked padding, pixels 48
inpaint area, masked only
sampling steps 26
denoising strength 0.4
seed: -1

You can see everything else in PNG Info tab with each image.
I hope this was helpful and I didn't miss something.
 
Last edited:

Sepheyer

Well-Known Member
Dec 21, 2020
1,570
3,767
Workflow Example
You are in that sushi mood, lol!
 

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,143
1,954
I activate the hiresfix again and use the same settings, meaning 40 hires steps and 0.3 denoising with the ultrasharp upscaler.
Ok, let's see what I get out of this. I take my picture from above that I like (this one), extract the PNG info and send it to txt2img:

I add 4x ultrasharp in the hires upscaler menu, set hires steps to 40, and denoising to 0.3:

1697210355192.png

Again, a coooompletely different result. (And the quality is also pretty blurry.)

1697210391881.png
 

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,143
1,954
(And the quality is also pretty blurry.)
Re-running the exact same thing with the "Upscale by" slider increased from 1 to 2 (I think the previous one was blurry because 1 meant the same resolution?), I got this:

1697210961492.png


So already way better in terms of sharpness, although I am not 100% satisfied with the result.
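For anyone else confused by that slider, here's the math I'm assuming it does, just base resolution times multiplier:

```python
# Assumed behavior of the "Upscale by" slider: output = base * multiplier.
def upscaled(width: int, height: int, multiplier: float):
    return int(width * multiplier), int(height * multiplier)

print(upscaled(512, 768, 1))  # (512, 768) - no real upscale, stays soft
print(upscaled(512, 768, 2))  # (1024, 1536)
```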

But funny that it did now recreate 1:1 the same picture.

Although the seed is still from the previous picture.

Why does SD not recreate from the seed, I don't get it :WaitWhat:
 

Jimwalrus

Well-Known Member
Sep 15, 2021
1,045
3,994
Re-running the exact same thing with the "Upscale by" slider increased from 1 to 2 (I think the previous one was blurry because 1 meant the same resolution?), I got this:

View attachment 3002841


So already way better in terms of sharpness, although I am not 100% satisfied with the result.

But funny that it did now recreate 1:1 the same picture.

Although the seed is still from the previous picture.

Why does SD not recreate from the seed, I don't get it :WaitWhat:
Double check the prompts, even removing a space or de-pluralising a word (and especially fixing a typo) can completely change the generated image.
If you've got xformers enabled, that can make an image vary a tiny amount between otherwise identical parameters, but not this much.

Anyway, now keep that seed and see what more/less gen steps, more/less hires steps, different denoising strengths, different Samplers, different Upscalers etc do.
You can automate this using Scripts > X/Y/Z plot - see the tutorial linked on pg1 of this thread - and get some side-by-side comparisons.
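What the X/Y/Z script automates is essentially a cartesian product of the settings you want to compare, one generation per cell. A quick sketch (the axis values here are just examples):

```python
from itertools import product

steps_axis = [20, 30, 40]
denoise_axis = [0.2, 0.3, 0.5]
samplers = ["Euler a", "DPM++ 2M"]

# One generation per combination, laid out side by side in the final grid.
grid = list(product(steps_axis, denoise_axis, samplers))
print(len(grid))  # 18 cells
```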

It will help you find your style too.
 
  • Like
Reactions: Mr-Fox and Sepheyer

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
As the eminent Jim said, nothing I have said is intended to be condescending, only to help. You are new here and I don't know what your skill or knowledge level is. I also keep other people in mind when I post tips, tutorials, or examples.
Please don't misunderstand our intentions.

I'm testing your matrix image. Pretty cool concept. It had the variation seed in Extras activated. This will of course give variations.
My first image didn't match yours, so I deactivated the variation seed. Now it's very similar to yours.

00071-2549670335.png

Clip skip 2 is not a problem. It's fine to use if it's needed. Try first without and then with clip skip 2 to see the effect.
It can fix things, but it can also make things worse; it depends on what you are generating and what you are generating with, such as the checkpoint and/or LoRAs etc. If they have been trained with clip skip 2 then it's of course better to use it, and the creator often says so on Civitai if this is the case.

Here's without clipskip 2
00074-2549670335.png

For hiresfix, the upscaler has a big effect on the final result so try others also. I suggested Ultrasharp because of the images you had posted, Anime oriented. Ultrasharp gives soft edges and smoothness. I'm very fond of NMKD Superscale since I mostly create images with photo realism. It gives crisp edges and fine details. I also use NMKD Face a lot.



With Ultrasharp
00075-2549670335.png

Now I saw that you had set the multiplier to 1; this means that you did not upscale at all. Set it over 1 to upscale. I would recommend at least 1.5, but I always use 2.
For demonstration only, this is NMKD Superscale with multiplier 1.5.
00077-2549670335.png
Also, don't use such a high CFG unless it is for a very specific purpose. Too high a CFG will make the image "burnt" or "overcooked".
I have set it to 8. I also increased the number of steps. Each step gives SD a chance to refine the image more. A higher number is better up to a point; you get diminishing returns over 40.
A very high number of steps can cause loss of details, such as nipples for example.
 
Last edited:

Jimwalrus

Well-Known Member
Sep 15, 2021
1,045
3,994
There we go, the Variation seed in the Extras options is the smoking gun for not getting consistent results with a given seed, especially if it's set to random (-1).

It's one of those options that's easy to set and forget it's still on.
 
Last edited:
  • Like
Reactions: Mr-Fox

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,143
1,954
There we go, the Variation seed in the Extras options is the smoking gun for not getting consistent results with a given seed, especially if it's set to random (-1)
I'm testing your matrix image. Pretty cool concept. It had the variation seed in Extras activated. This will of course give variations.
My first image didn't match yours, so I deactivated the variation seed. Now it's very similar to yours.
I had Extras ticked off when I tried to re-create from the seed, as you can see in my shared settings here.

The extras thing is probably baked into the PNG because that's how I got there. However, when trying to recreate from the seed, I ticked it off and only used the seed, so this isn't causing the issue.


Double check the prompts, even removing a space or de-pluralising a word (and especially fixing a typo) can completely change the generated image.
I didn't touch the prompts at all either :/

Anyway, now keep that seed and see what more/less gen steps, more/less hires steps, different denoising strengths, different Samplers, different Upscalers etc do.
But I want to re-create the picture from my original seed! SD created a completely new image and then re-created new generations based on that one instead of the OG seed used in the first place - I don't get it.


For hiresfix, the upscaler has a big effect on the final result so try others also. I suggested Ultrasharp because of the images you had posted, Anime oriented. Ultrasharp gives soft edges and smoothness. I'm very fond of NMKD Superscale since I mostly create images with photo realism. It gives crisp edges and fine details. I also use NMKD Face a lot.
Where do I drop them?
 
  • Like
Reactions: Mr-Fox

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,143
1,954
As the eminent Jim said. Nothing I have said is with intention of being condescending, only to help. You are new here and I don't know what your skill is or knowledge. I keep also other people in mind when I post tips or any tutorials or examples.
Please don't missunderstand our intentions.
No don't worry, I just wanted to explain where I'm coming from so that you guys understand where I am at with understanding prompt engineering.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
I had Extras ticked off when I tried to re-create from the seed, as you can see in my shared settings here.

The extras thing is probably baked into the PNG because that's how I got there. However, when trying to recreate from the seed, I ticked it off and only used the seed, so this isn't causing the issue.

I didn't touch the prompts at all either :/

But I want to re-create the picture from my original seed! SD created a completely new image and then re-created new generations based on that one instead of the OG seed used in the first place - I don't get it.

Where do I drop them?
The Extras option was selected when I loaded your image from PNG Info. This is why it gave variations. Make sure it's not active.
After I deselected it, everything has been fine. 822547809 is the seed it had, and the variation strength was 0.2.
If you load an image from PNG Info you will get all its settings, including some that you might not want.
Normally when you set the seed, as long as you don't change the resolution or something major, the image should not change drastically.
 
  • Like
Reactions: Jimwalrus

Sepheyer

Well-Known Member
Dec 21, 2020
1,570
3,767
Open Pose ControlNet - Did You Know?

So, ComfyUI has a feature where it can cycle thru the pose files either randomly or up/down the folder content. Done via primitives, you can grab the setup from here:

a_00550_.png
So what?

The non-ControlNet posing is hit and miss, and so are body proportions, etc. You can address this via ControlNet's OpenPose, but you have to (more like had to) pick the file manually, which went against generating poses randomly. Now you can get CUI to just go through all the pose files, which can give you more variations than whatever the actual model comes with.
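Outside ComfyUI, the same idea is easy to sketch in Python: walk a folder of pose PNGs either in order (wrapping around, like the up/down primitive) or at random. The folder path and `*.png` pattern are assumptions, not what CUI does internally:

```python
import random
from itertools import cycle
from pathlib import Path

def pose_picker(folder: str, shuffle: bool = False):
    # Collect the OpenPose reference images once, in a stable order.
    files = sorted(Path(folder).glob("*.png"))
    if shuffle:
        while True:              # random pick, like the random primitive
            yield random.choice(files)
    else:
        yield from cycle(files)  # walk the folder content, wrapping around
```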