[Stable Diffusion] Prompt Sharing and Learning Thread

namhoang909

Newbie
Apr 22, 2017
89
48
What arguments are you using for Stable Diffusion (Automatic1111)?
I saw someone recommend "--xformers --precision full --no-half --no-half-vae". I have only used '--xformers'.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,804
What arguments are you using for Stable Diffusion (Automatic1111)?
I saw someone recommend "--xformers --precision full --no-half --no-half-vae". I have only used '--xformers'.
I would recommend using only what you personally need; don't copy what someone else has. Only use the arguments that your specific setup and your usage of Stable Diffusion actually require: --xformers, and potentially --api if you will use the plugin for Photoshop. Only add an argument when you are trying to resolve an issue, and remove it once the devs solve the issue in an update.
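For anyone unsure where these arguments live: on Windows they go into webui-user.bat in the WebUI folder. A minimal sketch of the stock AUTOMATIC1111 file (keep only the flags you actually need; --api is just an example here):

    @echo off
    set PYTHON=
    set GIT=
    set VENV_DIR=
    rem only the arguments your setup actually needs:
    set COMMANDLINE_ARGS=--xformers --api
    call webui.bat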
 

SDAI-futa

Newbie
Dec 16, 2023
29
31

Stable Diffusion is an AI tool that lets you generate images from text prompts or from other images. It uses datasets to generate new and original visual content; Stable Diffusion users call these datasets models. Using the models requires skill and knowledge. There is an analogy with professional photography: mere button-pressing will lead to unimpressive results. Thus, much like in photography, one needs to learn to interact with the tool that interacts with the environment. Arguably, that is when art is created.

Stable Diffusion's actual image generation takes a chunk of time; we can all save each other time, help each other, and speed up the learning process if we share prompts and the images they generated. Thus, this thread is primarily a prompt/nrompt (negative prompt) sharing thread, with images and grids being secondary to prompts.

Please share your prompts/nrompts, images and grids. Do not share just the images, as there are better threads for that.

Rules:
  1. Stable Diffusion only. If you would like to use other tools, please create new threads for the respective tools.
  2. Images without prompts are absolutely taboo and will be reported as breaking the rules. This is not an AI-art sharing thread; the AI-art sharing thread is here: link. A qualifying post must have:
    1. model name with a link to get it.
    2. prompt/nrompt. If the nrompt (negative prompt) is empty, do expressly state so. Please use spoilers for the prompts.
    3. a grid of images attached, ideally 6x5, so readers can get a feel for what kind of images the prompt will generate.
    4. the actual image or images. Please use spoilers for all but one image. Images should be resized to fit the page; any image requiring scrolling might end up being reported. Please avoid situations where viewers need to scroll across a single image.
    5. Points 2.1, 2.2, 2.3 and 2.4 can be ignored if your image contains metadata for the prompts, seed, model name, etc. These are mostly *.png files. A jpeg file doesn't contain prompt metadata and is hence an automatic violation of the requirements, subject to administrative action.
  3. Tutorials, ideas, suggestions are great. Discussion is OK.
  4. Trolling or garbage quality will be removed. Nudity is great, weird and freaky content is prooobably not. Cringe and gross content are definitely not.
  5. If sharing someone else's work, always credit the creator if known; else write "Creator: unclaimed". The only exception to the rule: the image has the creator's watermark.
  6. If you are the creator of an unclaimed image, you have to post a link to your original work in order to claim the credit.
  7. This is not an art thread; it is for learning. For sharing your AI art, go here
Guides and Tutorials:

Installing Stable Diffusion
Stable Diffusion is a "backend", to simplify. The front ends to Stable Diffusion are:

WebUi by AUTOMATIC1111
Install this - it is what 100% of folks need and want.
The release page is here:
How to install (if you never used "git clone") here:
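If you have never used "git clone", the whole install is roughly this (a sketch, assuming git and Python are already installed; the repository URL is the official AUTOMATIC1111 one):

    git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
    cd stable-diffusion-webui
    webui-user.bat

On Linux, run ./webui.sh instead of webui-user.bat.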
ComfyUI by comfyanonymous (hat tip to Elefy for calling this one out)
Brand new and super-promising. Pipeline-based workflow. It doesn't yet have all the features that WebUI has. Install it only after you have spent a few months in WebUI.
The release page is here:
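Installing it follows the same pattern (a sketch; check the ComfyUI README for the current requirements):

    git clone https://github.com/comfyanonymous/ComfyUI.git
    cd ComfyUI
    pip install -r requirements.txt
    python main.py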

Good practices - if you are just starting!
  • Enable the setting that attaches prompts to your PNG files automatically. This is in Settings: "Save text information about generation parameters as chunks to png files = on". Then you can open any of your PNG files with a text editor and find the settings that got you that image (see the one-liner after this list). Enable this right away.
  • Add "git pull" at the very beginning of your "webui-user.bat". If you disagree and think this is a bad practice because version control, then this advice is not for you, you are fine.
  • Leverage the automatic image import tool: link
  • Save your grids to the image folders instead of a separate folder. Much easier to organize things this way. If you disagree and have a different workflow, then this advice is not for you, you are fine.
  • Always have "Restore faces = on". Here is why: link
  • Learn to use "Script -> XYZ plot" as early as possible. This will save you a massive amount of time early on. Here is what it means and how to use it: link
  • Check out the optimization flags that you can run Stable Diffusion with: . You might want to have --xformers on, and a few others.
  • Check out the prompt book: link
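Two of the tips above in concrete form. The "git pull" line goes at the very top of webui-user.bat, before anything else runs (a sketch; the rest of your file stays as it is):

    @echo off
    rem update to the latest version on every launch:
    git pull
    set COMMANDLINE_ARGS=--xformers
    call webui.bat

And once the PNG-chunk setting is on, the generation parameters are stored as plain text inside the file, so besides opening it in a text editor, on Linux a one-liner like this will print them (strings ships with binutils; the filename is just an example):

    strings yourimage.png | head -20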
Guides
link - How to install and use LORA (i.e. zero in on cameltoe, poking nipples, etc.) by Mr-Fox
link - Glossary by Jimwalrus
link - Text control by Mr-Fox
link - Post-processing by Mr-Fox
link - Upscaling by Mr-Fox
link - Illegal Dark Magic by Mr-Fox
link - Training LORA by Schlongborn (troubleshooting: post; note: 6GB 1660 cards train a LORA in 10 hours or longer)
Training LORA - a much longer, much more detailed post by Mr-Fox
link - Training LORA - a good setting for Kohya to start with by Mr-Fox
link - Pipeline-based workflow and its power by Sepheyer
link - ComfyUI face-fix workaround by Sepheyer
link - ComfyUI hi-res fix workflow by Sepheyer
link - Widescreen renders by Dagg0th
link - SDXL by Sepheyer

Models
How to use models:
A quick overview of how models compare using the same seed, prompt and settings: link and link






Embeddings
How to use embeddings:
link

I just posted a few simple tips for running SD on Linux. Thanks for putting up this thread; I'm going to follow it, learn more, and share what I know.
 

SDAI-futa

Newbie
Dec 16, 2023
29
31
Installing Stable Diffusion
Stable Diffusion is a "backend", to simplify. The front ends to Stable Diffusion are:
To expand on a couple of the things I mentioned about installing SD on a Linux distribution, from my experience: I run SD on a recent Linux Mint distribution.

You want to make sure you're using the proprietary graphics drivers (on Linux there's usually an open-source alternative by default). In Mint this is simple to do through the Driver Manager; I am not sure how you'd switch in other Linux distributions, but look for it.

  1. To prevent freezes from memory leaks (this was pretty bad when I started out; it froze the entire system), install tcmalloc:
    1. sudo apt install libgoogle-perftools-dev (the -dev indicates you might need source-code repositories if you don't have those turned on)
    2. export LD_PRELOAD=libtcmalloc.so in the SD folder on your system

    source:
  2. Run SD with the --medvram flag. Not a big hit to performance, and it prevents VRAM crashes from higher resolutions or more realistic checkpoints.
  3. Likewise, run with the --xformers flag. You will see that the xformers error stops showing up. This speeds up the whole process and further helps with memory crashes (for me, I was finally able to get over 600x800 resolution on realistic models). On more recent Linux distributions xformers is built in; we just need to tell SD to actually use it.

Obviously the above worked for me, on my distribution, on my desktop, with my hardware. It is generic enough that I believe it's just good general practice for anyone running on Linux, but let me know if you think otherwise. A combined launch-file sketch follows below.
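Putting points 2 and 3 together with the tcmalloc preload: on a stock AUTOMATIC1111 install all of this can live in webui-user.sh next to webui.sh (a sketch; the flags are examples, keep only what you need):

    #!/bin/bash
    # preload tcmalloc to avoid the memory-leak freezes
    export LD_PRELOAD=libtcmalloc.so
    # launch flags: lower VRAM use plus xformers
    export COMMANDLINE_ARGS="--medvram --xformers"

Then start SD with ./webui.sh as usual; it reads webui-user.sh on startup.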
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,575
3,776
To expand on a couple of the things I mentioned about installing SD on a Linux distribution, from my experience: I run SD on a recent Linux Mint distribution.

You want to make sure you're using the proprietary graphics drivers (on Linux there's usually an open-source alternative by default). In Mint this is simple to do through the Driver Manager; I am not sure how you'd switch in other Linux distributions, but look for it.

  1. To prevent freezes from memory leaks (this was pretty bad when I started out; it froze the entire system), install tcmalloc:
    1. sudo apt install libgoogle-perftools-dev (the -dev indicates you might need source-code repositories if you don't have those turned on)
    2. export LD_PRELOAD=libtcmalloc.so in the SD folder on your system

    source:
  2. Run SD with the --medvram flag. Not a big hit to performance, and it prevents VRAM crashes from higher resolutions or more realistic checkpoints.
  3. Likewise, run with the --xformers flag. You will see that the xformers error stops showing up. This speeds up the whole process and further helps with memory crashes (for me, I was finally able to get over 600x800 resolution on realistic models). On more recent Linux distributions xformers is built in; we just need to tell SD to actually use it.

Obviously the above worked for me, on my distribution, on my desktop, with my hardware. It is generic enough that I believe it's just good general practice for anyone running on Linux, but let me know if you think otherwise.
Sweet! I added your post to the original post's guide section.
 

felldude

Active Member
Aug 26, 2017
572
1,701
I know it's not Stable Diffusion, but has anyone messed around with Krea AI? The realtime-ish generation is cool.
That might be the training. They claim to have trained from scratch (still likely off the AI-organized set), or maybe even starting from something older like ResNet-50.

If you want a fun project, you can do an image-to-image search for the closest original image in the 600 TB image set that SD 1.5 was trained on here


Guess how many times these will come up as a source of training for "1girl from x country".

Also, I managed to find my watermark in the set (from over 10 years ago... ehh, 15 years).
Validation from a 60-million-image set... lol

 

namhoang909

Newbie
Apr 22, 2017
89
48
This is my current UI; a couple of questions:
Where is the "Restore faces" setting?
While I have indicated it in both positive and negative prompts, it still generated a nude picture, and sometimes other people appear. How do I fix that? Thanks in advance.
 

sharlotte

Member
Jan 10, 2019
321
1,723
I'm not sure whether restore faces is there any longer. A very good tool for faces is ADetailer in Automatic1111. Just look for adetailer in the 'Extensions' tab to install it. There are loads of videos about it; this one covers the install, for instance ( ).
 

namhoang909

Newbie
Apr 22, 2017
89
48
I'm not sure whether restore faces is there any longer. A very good tool for faces is ADetailer in Automatic1111. Just look for adetailer in the 'Extensions' tab to install it. There are loads of videos about it; this one covers the install, for instance ( ).
I think I've found face restore. Long story: I tried the XYZ plot, saw "Face restore", so I chose it and input some random value to see if it worked. Checking the cmd window, I saw SD was downloading from this link . Not sure if it can be used in Automatic1111, though.
 

Jimwalrus

Well-Known Member
Sep 15, 2021
1,053
4,035
" "

I think I've found face restore. Long story: I tried the XYZ plot, saw "Face restore", so I chose it and input some random value to see if it worked. Checking the cmd window, I saw SD was downloading from this link . Not sure if it can be used in Automatic1111, though.
For face restore, I'd recommend using GFPGAN over CodeFormer: far better, especially for photorealistic images. In A1111, select Face restoration on the Settings tab. I like to use it at ~0.05 visibility.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,575
3,776
I think I've found face restore. Long story: I tried the XYZ plot, saw "Face restore", so I chose it and input some random value to see if it worked. Checking the cmd window, I saw SD was downloading from this link . Not sure if it can be used in Automatic1111, though.
And if I may add my $0.02: do switch to ComfyUI rather than A1111. I know at first it will be butthurt for weeks on end, but it will pay off.
 

SDAI-futa

Newbie
Dec 16, 2023
29
31
There are extensions that let you build entire prompt libraries with image previews.
Can you expand on that a bit? Which extensions do you know of that help with building images from img2img? I use Pose Net a lot (with different models) and it helps for sure, but I'm always happy to learn of more. Also techniques. Thanks!
 

DreamingAway

Member
Aug 24, 2022
254
661
Can you expand on that a bit? Which extensions do you know of that help with building images from img2img? I use Pose Net a lot (with different models) and it helps for sure, but I'm always happy to learn of more. Also techniques. Thanks!
In that quote I meant to say that you can save images with metadata into a local library that is indexed and searchable. It's just a much better user experience than the built-in A1111 prompt library (which is extremely old; that prompt tool was in the very first version of A1111 and predates all the amazing stuff we have today).
 

Synalon

Member
Jan 31, 2022
225
665
This is my current UI; a couple of questions:
Where is the "Restore faces" setting?
While I have indicated it in both positive and negative prompts, it still generated a nude picture, and sometimes other people appear. How do I fix that? Thanks in advance.
You can find restore faces by going to settings:

Settings > Face restoration (in postprocessing) > tick the box

As others have said, there are extensions for CodeFormer/GFPGAN to make them show in the UI.
 

SDAI-futa

Newbie
Dec 16, 2023
29
31
There are extensions that let you build entire prompt libraries with image previews.
In that quote I meant to say that you can save images with metadata into a local library that is indexed and searchable. It's just a much better user experience than the built-in A1111 prompt library (which is extremely old; that prompt tool was in the very first version of A1111 and predates all the amazing stuff we have today).

Actually, share that too then, please. It would help.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,804
I would recommend using face restore only if absolutely necessary. It used to be needed, but since all checkpoints and SD in general have improved, most of the time you get a better result without face restore. However, you can use postprocessing instead, and again GFPGAN is superior to CodeFormer in most cases. You can find it in Settings/Postprocessing. Select both GFPGAN and CodeFormer if you wish; this will add a postprocessing option to the txt2img and img2img tabs. They have sliders so you can finetune them and even blend them if you wish, though GFPGAN is way better on its own than CodeFormer. I use GFPGAN postprocessing in all my images. It will improve the face, and in particular the eyes. If you combine it with After Detailer's "mediapipe_face_mesh_eyes_only" you can get really stunning eyes.

Comparison (same seed): no face restore and no postprocessing / GFPGAN postprocessing / GFPGAN postprocessing and adetailer
00136-1633903613.png 00137-1633903613.png 00138-1633903613.png
00136-1633903613 copy.png 00137-1633903613 copy.png 00138-1633903613 copy.png

I didn't manage to demonstrate the effect of adetailer as well as I wanted, but take my word for it that it is very useful; this checkpoint (devlishphotorealism) does produce very nice eyes on its own. With lesser models the effect might be more obvious.

Now that you have a nice face, you can use it with either Roop or ReActor for face swapping, or with IP-Adapter in ControlNet, for a more consistent character.

To avoid getting other people in your image, use tags such as "solo" in positive and "bystanders" or similar in negative. The image ratio and resolution might produce twins, and how the model has been trained can put other people in the image as well. Embeddings or LoRAs such as EasyNegative have a tendency to give unwanted side effects, so they could also be the cause.

In regards to nude or dressed, this also has to do with how the model has been trained. You can try "NSFW" and/or "undressed" in negative, and try "dressed" in positive. Instead of "big round breasts", use "big round bust", as this indicates dressed breasts.
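For instance, a minimal prompt/nrompt pair putting those tags to work might look like this (illustrative only; tune it for your checkpoint):

    Prompt: solo, 1girl, fully dressed, elegant summer dress, big round bust, detailed face
    Nrompt: NSFW, nude, undressed, bystanders, crowd, extra people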
 