[Stable Diffusion] Prompt Sharing and Learning Thread

modine2021

Member
May 20, 2021
381
1,254
Anyone have experience with these? They just aren't happening for me, even using the exact settings from the author's samples. My results are all only normal poses with panties and shirts intact.





My results: no panty pull or shirt lift

00000-2090988572.jpg
 
  • Like
Reactions: Mark17

wol6636

New Member
Jan 29, 2023
1
0
Don't know what happened. I was enjoying myself, but now nothing happens except this message, even after setting everything back to default values:


NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
Time taken: 15.92s
Torch active/reserved: 3242/3676 MiB, Sys VRAM: 5237/8192 MiB (63.93%)
 

rayminator

Engaged Member
Respected User
Sep 26, 2018
3,041
3,140
Open webui-user.bat or launch.py.

If launch.py: right-click, open with a text editor, find commandline_args = os.environ.get('COMMANDLINE_ARGS', "") and change it to commandline_args = os.environ.get('COMMANDLINE_ARGS', "--disable-nan-check") or commandline_args = os.environ.get('COMMANDLINE_ARGS', "--no-half").

If webui-user.bat: right-click, select "Show more options", click "Edit", look for set COMMANDLINE_ARGS= and add --disable-nan-check or --no-half.

Some models don't work in some Stable Diffusion programs.
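For reference, here's a minimal sketch of what the edited line in launch.py ends up looking like (the batch-file route just puts the same flags after set COMMANDLINE_ARGS=; which flags you pick is up to you):

```python
import os

# The fallback string is only used when the COMMANDLINE_ARGS environment
# variable is not set, matching how launch.py reads its options.
commandline_args = os.environ.get('COMMANDLINE_ARGS', "--disable-nan-check --no-half")

print(commandline_args.split())
```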
 

modine2021

Member
May 20, 2021
381
1,254
Open webui-user.bat or launch.py.

If launch.py: right-click, open with a text editor, find commandline_args = os.environ.get('COMMANDLINE_ARGS', "") and change it to commandline_args = os.environ.get('COMMANDLINE_ARGS', "--disable-nan-check") or commandline_args = os.environ.get('COMMANDLINE_ARGS', "--no-half").

If webui-user.bat: right-click, select "Show more options", click "Edit", look for set COMMANDLINE_ARGS= and add --disable-nan-check or --no-half.

Some models don't work in some Stable Diffusion programs.
Oh, I had already done that after that reply :) ... I was hoping for an answer to my question above about the panty pull.
 

Jimwalrus

Active Member
Sep 15, 2021
931
3,423
Anyone have experience with these? They just aren't happening for me, even using the exact settings from the author's samples. My results are all only normal poses with panties and shirts intact.

My results: no panty pull or shirt lift

View attachment 2518558
Could you please post a PNG with the prompts etc instead of a jpg? Thanks.
It certainly seems weird that it's not picking up the LoRA - unless you're missing the trigger word(s) in the prompt.

EDIT: I was unaware that JPEGs can also carry the relevant generation metadata. I stand corrected!
 
Last edited:

Nano999

Member
Jun 4, 2022
155
69
I don't have the old pic anymore; here's a new one with the other model:


This model gives blurry results at 0.2 and normal at 0.3.
The other model is blurry at 0.45 and normal at 0.5.
So it depends on the model, then.
 

Nano999

Member
Jun 4, 2022
155
69
E.g. I'm changing the denoising strength of the image above.

30 steps + 10 steps for hires.

For each value 0.1 ... 0.5 ... x it will render 30 steps + 10 steps.
Every 30-step image will be the same; every 10-step image will be different.
So I want to take the very first 30-step image and let SD use it as a base for the hires.fix images, without generating the same first 30-step image every time, to save time.

So it should generate the 30-step base image once, then do 10 steps of hires.fix three times, for 0.1, 0.2, 0.3 denoising strength,
but SD keeps rendering the 30 steps each time, and it's the same image.
 
Last edited:

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,794
I don't have the old pic anymore; here's a new one with the other model:


This model gives blurry results at 0.2 and normal at 0.3.
The other model is blurry at 0.45 and normal at 0.5.
So it depends on the model, then.
The point of having the metadata is so that anyone who wishes to help can see the settings etc. and better suggest ones that may fix your issue. I would use a different upscaler; they make a big difference to image quality.
Try Lanczos if you only have the default ones. I would, however, recommend getting NMKD Superscale from this . There are other ones there too, such as one focused on faces. I haven't tried any of the others myself yet, though.
The sampling method is also something I highly recommend you change, either to DPM++ 2M Karras or DPM++ SDE.
Try using 2x the hires steps relative to the sampling steps if you wish to keep the composition.
Example:
25 sampling steps = 50 hires steps
I recommend 20-30 sampling steps and 40-60 hires steps.
I believe that if you make these changes you will see a big increase in image quality.
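The 2x rule is trivial to express as a quick sketch (a throwaway helper of my own naming, not part of the webui):

```python
def hires_steps(sample_steps: int, factor: int = 2) -> int:
    """Return the hires-fix step count for a given sampling step count."""
    return sample_steps * factor

print(hires_steps(25))  # 50, as in the example above
```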
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,794
E.g. I'm changing the denoising strength of the image above.

30 steps + 10 steps for hires.

For each value 0.1 ... 0.5 ... x it will render 30 steps + 10 steps.
Every 30-step image will be the same; every 10-step image will be different.
So I want to take the very first 30-step image and let SD use it as a base for the hires.fix images, without generating the same first 30-step image every time, to save time.

So it should generate the 30-step base image once, then do 10 steps of hires.fix three times, for 0.1, 0.2, 0.3 denoising strength,
but SD keeps rendering the 30 steps each time, and it's the same image.
I understand now. It's not possible in txt2img, but in img2img you can use the upscaling function with the plot script. First generate the image in txt2img, then send it to the img2img tab and generate it there using the same prompt and seed in combination with the plot script. I think that achieves what you're asking for, though upscaling is not exactly the same as hires fix.
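If you'd rather script it than use the plot script, the webui also exposes an HTTP API when launched with --api. A rough sketch of reusing one base image across several denoising strengths (the endpoint names and payload fields assume that API; the helper and placeholder values are mine):

```python
import json

# Assumes the webui was started with the --api flag on the default port.
TXT2IMG_URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"
IMG2IMG_URL = "http://127.0.0.1:7860/sdapi/v1/img2img"

def img2img_payloads(base_image_b64, prompt, seed, strengths, steps=10):
    """Build one img2img request per denoising strength, all sharing the
    same base image, so the 30-step base is only ever generated once."""
    return [
        {
            "init_images": [base_image_b64],
            "prompt": prompt,
            "seed": seed,
            "steps": steps,
            "denoising_strength": d,
        }
        for d in strengths
    ]

# Placeholder inputs for illustration; POST each payload to IMG2IMG_URL.
payloads = img2img_payloads("<base64 of the 30-step image>", "1girl, ...",
                            12345, [0.1, 0.2, 0.3])
print(json.dumps(payloads[0], indent=2))
```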
 
Last edited:

Nano999

Member
Jun 4, 2022
155
69
I understand now. It's not possible in txt2img, but in img2img you can use the upscaling function with the plot script. First generate the image in txt2img, then send it to the img2img tab and generate it there using the same prompt and seed in combination with the plot script. I think that achieves what you're asking for, though upscaling is not exactly the same as hires fix.
Yeah, what a shame. This function is so needed in txt2img.
 

modine2021

Member
May 20, 2021
381
1,254
Could you please post a PNG with the prompts etc instead of a jpg? Thanks.
It certainly seems weird that it's not picking up the LoRA - unless you're missing the trigger word(s) in the prompt.
Oh, I thought that was a PNG. I usually edit in Photoshop then save as a JPG. Try these. I'm using the exact settings the model creator used for their samples:

Panty pull/drop (supposed to be pulling panties down)
00000-2090988572.png


Shirt lift (supposed to be flashing breasts)

00001-920875561.png


Skirt lift (supposed to be a woman lifting her skirt)
00003-2434198864.png
 
  • Like
Reactions: Mr-Fox

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,794
Yeah, what a shame. This functiong is so needed in txt2img
If the goal is to get a better image, I don't think you need to worry too much about this denoising comparison.
Keep it between 0.2-0.3. What will make the big difference are the changes I recommended in the other post.
Btw, I forgot to say: use postprocessing. I prefer GFPGAN, but you can try CodeFormer and see which you prefer. Sometimes using Restore Faces is good, other times it's bad; you need to try it and see which you prefer.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,794
Oh, I thought that was a PNG. I usually edit in Photoshop then save as a JPG. Try these. I'm using the exact settings the model creator used for their samples:

Panty pull/drop (supposed to be pulling panties down)
View attachment 2520770


Shirt lift (supposed to be flashing breasts)
View attachment 2520829


Skirt lift (supposed to be a woman lifting her skirt)
View attachment 2520910
I don't have that checkpoint or the hypernetwork, so I can't run tests myself. I would suggest trying it without the hypernetwork and seeing if it helps. In my experience, hypernetworks and textual inversions have a tendency to take over and mess up other things, such as making LoRAs ineffective.
 
  • Like
Reactions: modine2021

Davox

Well-Known Member
Jul 15, 2017
1,521
2,289
Oh, I thought that was a PNG. I usually edit in Photoshop then save as a JPG. Try these. I'm using the exact settings the model creator used for their samples:

Skirt lift (supposed to be a woman lifting her skirt)
View attachment 2520910
I use one called lora:skirtliftTheAstonishing_skirtliftv1:1

It absolutely loves doing just a torso shot and cutting off the head, even if I adjust the negative prompt to account for it. Making the resolution taller helps.

Most of these seem to be bottomless rather than skirt lift, so you might want to add something about a skirt to the prompt. I don't normally see this, so I think it might be something to do with the lab coat prompt.

This was produced with your prompt and negative, but I left everything else as the default. The checkpoint is Realistic Vision v1.3.

grid-0020.png

The one I use for shirt lift is lora:shirtliftALORAFor_shirtliftv1:1

grid-0026.png
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,794
How to access the postprocessing tab? Is it an extension, or?
In Settings, go to the Postprocessing tab, select both GFPGAN and CodeFormer, and apply settings, of course. They will both be visible in txt2img and img2img.
You have a visibility slider that you use to select the one you want, or I guess you can do a mix of both; CodeFormer also has a weight slider. So far I prefer to use GFPGAN, and I just set it to 1.
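As far as I can tell, the visibility slider is just a linear blend between the original pixel and the restored one. A rough per-pixel illustration (this is my assumption about the internals, not actual webui code):

```python
def apply_visibility(original: float, restored: float, visibility: float) -> float:
    # visibility = 1.0 keeps only the restored result; 0.0 keeps the original pixel.
    return original * (1.0 - visibility) + restored * visibility

print(apply_visibility(0.2, 0.8, 1.0))  # full visibility -> restored value, 0.8
```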
 
  • Like
Reactions: Nano999

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,794
I use one called lora:skirtliftTheAstonishing_skirtliftv1:1

It absolutely loves doing just a torso shot and cutting off the head, even if I adjust the negative prompt to account for it. Making the resolution taller helps.

Most of these seem to be bottomless rather than skirt lift, so you might want to add something about a skirt to the prompt. I don't normally see this, so I think it might be something to do with the lab coat prompt.

This was produced with your prompt and negative, but I left everything else as the default. The checkpoint is Realistic Vision v1.3.

View attachment 2520947

The one I use for shirt lift is lora:shirtliftALORAFor_shirtliftv1:1

View attachment 2520972
To avoid torso-only shots, face out of view, etc., try using "out of frame", "cropped", "face cropped", "face cut", etc. in the negative prompt; you might need to weight them heavily, e.g. (cropped:1.5).
Also use something in the positive prompt such as "full body view". It's very effective to use a positive prompt in combination with a negative prompt for a specific result.
 
  • Like
Reactions: modine2021

Davox

Well-Known Member
Jul 15, 2017
1,521
2,289
To avoid torso-only shots, face out of view, etc., try using "out of frame", "cropped", "face cropped", "face cut", etc. in the negative prompt; you might need to weight them heavily, e.g. (cropped:1.5).
Also use something in the positive prompt such as "full body view". It's very effective to use a positive prompt in combination with a negative prompt for a specific result.
I was doing both (except the weighting); they're in my usual prompts/negatives. They didn't make much of a difference with that LoRA, so I cut them out to make my prompts match the guy who was asking for help.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,794
I was doing both (except the weighting); they're in my usual prompts/negatives. They didn't make much of a difference with that LoRA, so I cut them out to make my prompts match the guy who was asking for help.
Try it with the weight in both positive and negative.

Example
Positive: (full body view:1.5)
Negative: (cropped:1.5), (face cut:1.5), (torso only view:1.5)
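If you end up testing many weights, the (term:weight) attention syntax is easy to build in a few lines (a throwaway helper of my own, nothing to do with webui internals):

```python
def weighted(term: str, weight: float) -> str:
    """Wrap a prompt term in the (term:weight) attention syntax."""
    return f"({term}:{weight})"

# Build the negative prompt from the example above.
negative = ", ".join(weighted(t, 1.5) for t in ["cropped", "face cut", "torso only view"])
print(negative)  # (cropped:1.5), (face cut:1.5), (torso only view:1.5)
```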

This is sledgehammer powerful, though depending on how a LoRA is trained it might not be enough.
A sure sign that a LoRA is either overtrained or just badly trained is that it's too dominant and hard to scale back.
Also keep in mind that if you set a LoRA's strength high, it's going to take over more, and its tendencies are going to be more prominent, such as only upper-torso views. Use only as much strength as the LoRA needs to do what it's supposed to without taking over.