[Stable Diffusion] Prompt Sharing and Learning Thread

Nano999

Member
Jun 4, 2022
166
73
I have an issue with image generation.

The reason:
modules.devices.NansException: A tensor with all NaNs was produced in VAE. This could be because there's not enough precision to represent the picture. Try adding --no-half-vae commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.

The whole generation process stops when this error occurs.


Workaround:
set COMMANDLINE_ARGS=--disable-nan-check

Now the image generation process continues even when this error appears, but the affected images come out as solid black.
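For reference, a minimal sketch of the fix the error message itself suggests, assuming the standard Windows webui-user.bat launcher (file name and path may differ on your install). Running the VAE in full precision usually stops the NaNs instead of merely hiding the check:

@echo off
rem webui-user.bat (sketch): run the VAE in full precision so it stops producing NaNs
set COMMANDLINE_ARGS=--no-half-vae
rem if NaNs persist, run the whole model in full precision (slower, needs more VRAM):
rem set COMMANDLINE_ARGS=--no-half-vae --no-half
call webui.bat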

Has anyone faced this issue? Is there a proper fix?
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Sometimes SD starts acting strange and the best solution is simply to restart it. Sometimes it even requires a PC reboot.
I don't remember the exact error, but yes, I have had black images; I have also had images of random color noise.
And since you are going to restart anyway, it's best to update everything, just in case.
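Updating is usually just a git pull; a sketch, assuming a git-based install of the webui (your folder name may differ):

cd stable-diffusion-webui
git pull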
 

modine2021

Member
May 20, 2021
417
1,389
How do I fix distortions? This looks terrible, especially the face. I just started using Stable Diffusion. I know there are extensions and whatnot.

[Attached image: 00009-1149139641.png]
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
And the hands and arms?
Number one, pick a good checkpoint (model) that is relevant for the type of image you are attempting.
I would recommend: [links lost].
Next, make your prompt as good as possible, with a clear description, proper weighting, and a sufficient negative prompt.
Weighting means how much emphasis is put on a tag and it's done like this: "(large breast:1.4)" .
It can also be used in order to decrease something: "(Diffused light:0.6)" .
Use negative prompts for things you don't want to see in your image. Such as (ugly), (disfigured), (deformed), (fused), (conjoined), (extra limbs), (missing limbs) etc. You can of course weight tags in the negative prompts as well.
It is very effective to use the positive prompt in unison with the negative prompt.
Positive: (large breast:1.4)
Negative: (small breast:1.4)
Don't use too heavy a weighting though; it may "break" the prompt and make the result unpredictable and more likely to create monstrosities.
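As a sketch of the positive and negative prompts working in unison (these tags and weights are only illustrative, not a tested recipe):

Positive: (photorealistic:1.2), portrait of a woman, (large breast:1.4), (diffused light:0.6)
Negative: (small breast:1.4), (ugly:1.2), (disfigured), (deformed), (fused), (extra limbs), (missing limbs)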
You can also try post-processing. In Settings/Postprocessing, enable both GFPGAN and CodeFormer.
They will then show up in txt2img and img2img, where you adjust "visibility" to activate them. Try either one on its own, and/or with "Restore Faces", or try them both together and/or with "Restore Faces". You can choose which type of face restoration is used under Settings/Face Restoration. Also try different amounts of steps, different CFG Scale values, and different samplers.
Euler a is fast and a good start, but you can get more out of SD with others. I personally use "DPM++ SDE Karras" most of the time and sometimes "DPM++ 2M Karras". Heun is also very good, but try them all.
If you want to compare settings in a way that is easy to overview, use the X/Y/Z plot script.
It will generate a grid of images.
You can have samplers on the x-axis and the number of steps for each sampler on the y-axis, as an example.
[Attached grid: xyz_grid-0000-3142985125.png]
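A sketch of that comparison as you would set it up in the webui (the values are just examples):

Script: X/Y/Z plot
X type: Sampler    X values: Euler a, DPM++ SDE Karras, Heun
Y type: Steps      Y values: 10, 20, 30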

If you have the GPU for it, you can use Hires fix. It is the largest image-quality enhancement available at this moment.
Use as high an "Upscale by" multiplier as your GPU will allow; you will find the limit when you get a CUDA out-of-memory error.
Use Hires steps between 10 and 30, though 15-20 is usually good. Use a Denoising strength of 0.4-0.6. Use a good upscaler.
Different upscalers will give slightly different results. The main differences are sharpness, light and color.
Lanczos or SwinIR_4x is a good start.
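Pulling those numbers together, a starting point might look like this (tune "Upscale by" down if you hit the memory limit):

Hires. fix: enabled
Upscale by: 2
Hires steps: 15
Denoising strength: 0.5
Upscaler: SwinIR_4x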
Miscellaneous "hidden" things that can increase image quality:
Get the new VAE. You can read about it in this post.
In Settings/Stable Diffusion, enable "Enable quantization in K samplers for sharper and cleaner results".
If you have done everything up to this point but still have issues, you can try "Clip skip" in Settings/Stable Diffusion.
2 is usually enough, but sometimes 3 is the ticket.

Good luck, and remember to have fun.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
This is a working theory. Say you use this approach (link in the original post) but you starve your latent background of any meaning by rendering it (black background:1.0).

Then the final image takes on this dramatic feel:
[Attached image and spoiler content]
I have not messed with dark backdrops yet. Good to know. (y)
 

miaouxtoo

Newbie
Mar 11, 2023
46
132
Mr-Fox said:
Euler a is fast and a good start but you can get more out of SD with others. [...] Use Hires steps between 10-30, though 15-20 is usually good. Use 0.4-0.6 Denoising strength. [...]
I think 15-25 steps is usually enough for the initial generation. You get diminishing returns above that, and the samplers all converge to roughly the same thing, though I guess it might depend on the model as well. Better to use batch count rather than batch size if you're doing multiple generations, since batch count renders the images one after another instead of all at once in VRAM.
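For example, to get four images without the VRAM cost of a parallel batch, the txt2img fields would be (values illustrative):

Batch count: 4
Batch size: 1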
 

GranTurboAutismo

Active Member
Aug 4, 2019
638
1,078
Hello, does anyone know how to make it do bestiality stuff? Something similar to Candy_42 works ( ). Can't find any model/lora/etc that handles it well.
We both know what that means. Time to train your own model with that tag and a LoRA on the artist.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Hello, does anyone know how to make it do bestiality stuff? [...]
Just an observation. Bestiality is not something normal, it is a taboo for a reason. Why would you even be curious?
All one needs is a great pair of boobas and a badonkas.

" Zoophilia (Bestiality) is a form of sexual perversion (paraphilia), which involves sexual fantasies and acts with animals. Paraphilias are included under psychiatric disorders and this terminology was used for the first time in DSM III (First, 2014). "
 

Schlongborn

Member
May 4, 2019
432
1,533
miaouxtoo said:
I think usually 15-25 steps for the initial generation is often enough. [...]
You are right about that. There is some information available that gives a rough idea of how each of the samplers behaves vs. sampling steps: (bunch of links at the end of that page)

Some x/y plots I found informative:
[Attached: sampler-steps.jpg, sampler-steps2.jpg]
 

Schlongborn

Member
May 4, 2019
432
1,533
Also, exciting things are happening every day. There is now a model that can do video, which you could try to run locally if you want:

And even though I guess most people here care about visuals, some of you probably also like some narrative to go with it. And it looks like we might soon be able to run something like ChatGPT locally.

Here is the webui equivalent for LLaMA:

And here is a subreddit about local LLaMA (and all other LLMs), with instructions in the sticky topic:
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Quick tip. I just learned that we can "edit" the UI and add quick settings at the top, beside the checkpoint selector.
Add the setting's name under Settings/User Interface/Quicksettings list.

Example:
Add sd_hypernetwork and CLIP_stop_at_last_layers to the Quicksettings list, save, and restart the webui.
[Screenshots attached]
Other useful entries: sd_vae, sd_hypernetwork, sd_lora.
[Screenshot attached]
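As a sketch, the whole Quicksettings list field could then read like this (entry names vary between webui versions, so treat these as examples):

sd_model_checkpoint, sd_vae, sd_hypernetwork, sd_lora, CLIP_stop_at_last_layers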
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,570
3,767
Deep tan..
Random images from a large batch.

[Attached images]
I was trying to see if I can port the prompt into ComfyUI, no dice so far.

Looks like CUI is more sensitive to weights, and anything like 1.6 sends it into a tail spin.

As an aside, are the pokies a no for the lady? I saw them mentioned, but it didn't look like they were on in the same way the cameltoe was.

If I get a nice result I'll post it, but it looks like without adjusting the weights down, I won't get there at all :(
 


Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Sepheyer said:
I was trying to see if I can port the prompt into ComfyUI, no dice so far. [...] If I get a nice result I'll post it, but it looks like without adjusting the weights down, I won't get there at all :(
Yes, she is supposed to have pokies.
For some reason the pokies disappear when using Hires fix. I have tried a lot of things but have not been successful yet.
Also, I find LoRAs and other additions like embeddings or hypernetworks to be problematic. They have a tendency to override the prompt too much. I have used very heavy weights in this prompt, and they can all be scaled back a bit.
 