What arguments are you using for Stable Diffusion (AUTOMATIC1111)?

I would recommend using only what you personally need; don't copy what someone else has. Use only the arguments required for your specific setup and your usage of Stable Diffusion: --xformers, and potentially --api if you will use the plugin for Photoshop. Add an argument only when you are trying to resolve an issue, and remove it once the devs solve that issue in an update.
I saw someone recommend "--xformers --precision full --no-half --no-half-vae". I have only used --xformers.
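For reference, these flags go on the COMMANDLINE_ARGS line of webui-user.bat. A minimal sketch of that file (the stock A1111 layout; the flags shown are just examples from this thread, not a recommendation for every setup):

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
rem only --xformers for most setups; --api only if you use the Photoshop plugin
set COMMANDLINE_ARGS=--xformers --api

call webui.bat
```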
Stable Diffusion is an AI tool that lets one generate images using either text prompts or other images. It uses datasets to generate new and original visual content. Stable Diffusion users call these datasets models. Using the models requires skill and knowledge; there is an analogy with professional photography - mere button-pressing will lead to unimpressive results. Thus, similarly to photography, one needs to learn to interact with the tool that interacts with the environment. Arguably, art is then created.
Stable Diffusion's actual image generation takes a chunk of time; we can all save each other time, help each other, and speed up the learning process if we share prompts and the images they generated. Thus, this thread is primarily a prompt/nrompt (negative prompt) sharing thread, with images and grids being secondary to prompts.
Please share your prompts/nrompts, images and grids. Do not share just the images, as there are better threads for that.
Rules:
- Stable Diffusion only. If you would like to use other tools, please create new threads for the respective tools.
- Images without prompts are absolutely taboo and will be reported as breaking the rules. This is not an AI-art sharing thread. The AI-art sharing thread is here: link. A qualifying post must have:
- model name with a link to get it.
- prompt/nrompt. If the nrompt (negative prompt) is empty, do expressly state so. Please use spoilers for the prompts.
- a grid of images attached, ideally 6x5, so readers can get a feel for what kind of images that prompt will generate.
- actual image or images. Please use spoilers for all but one image. Images should be resized to fit the page. Any image requiring scrolling might end up being reported. Please avoid situations where viewers need to scroll across one image.
- Points 2.1, 2.2, 2.3, 2.4 can be ignored if your image contains meta for the prompts, seed, model name, etc. These are mostly *.png files. A jpeg file doesn't contain prompt meta and hence is an automatic violation of the requirements - subject to administrative action.
- Tutorials, ideas, suggestions are great. Discussion is OK.
- Trolling or garbage quality will be removed. Nudity is great, weird and freaky content is prooobably not. Cringe and gross content are definitely not.
- If sharing someone else's work, always credit the creator if known. Otherwise: "Creator: unclaimed". The only exception to this rule: the image has the creator's watermark.
- If you are the creator of an unclaimed image, you have to post a link with your original work in order to claim the credit.
- This is not an art thread; it is for learning. For sharing your AI art go here
Installing Stable Diffusion
Stable Diffusion is a "backend", to simplify. The front ends to Stable Diffusion are:
- WebUI by AUTOMATIC1111. Install this - this is what 100% of folks need and want. The release page is here: link. How to install (if you have never used "git clone"): link.
- ComfyUI by comfyanonymous (hat tip to Elefy for calling this one out). Brand new and super-promising. Pipeline-based workflow. Doesn't have all the features that WebUI has. Install it only after you spend a few months in WebUI. The release page is here: link.
Good practices - if you are just starting!
- Enable the setting that attaches prompts to your PNG files automatically. This is in Settings: "Save text information about generation parameters as chunks to png files = on". Then you can open any of your PNG files with a text editor and find the settings that got you that image. Enable this right away.
- Add "git pull" at the very beginning of your "webui-user.bat". If you disagree and think this is bad practice because of version control, then this advice is not for you, you are fine.
- Leverage the automatic image import tool: link
- Save your grids to the image folders instead of a separate folder. It is much easier to organize things this way. If you disagree and have a different workflow, then this advice is not for you, you are fine.
- Always have "Restore faces = on". Here is why: link
- Learn to use "Script -> XYZ plot" as early as possible. This will save you a massive amount of time early on. Here is what it means and how to use it: link
- Check out the optimization flags that you can run Stable Diffusion with: link. You might want to have [--xformers] on, and a few others.
- Check out the prompt book: link
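To see what the PNG setting above actually does: A1111 stores the generation parameters in a PNG tEXt chunk keyed "parameters", which is why a text editor (or a few lines of code) can recover the prompt from the file. A minimal stdlib-only sketch - the helper names are mine, and the demo PNG is a hand-built stand-in, not a real render:

```python
import struct
import zlib

def read_png_text_chunks(data: bytes) -> dict:
    """Parse tEXt chunks from PNG bytes; A1111 keeps prompts under 'parameters'."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    pos, out = 8, {}
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        payload = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, value = payload.partition(b"\x00")
            out[key.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return out

def _chunk(ctype: bytes, payload: bytes) -> bytes:
    """Build one PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(payload)) + ctype + payload
            + struct.pack(">I", zlib.crc32(ctype + payload)))

# Hand-built 1x1 PNG with a 'parameters' tEXt chunk, as A1111 would write it.
demo = (b"\x89PNG\r\n\x1a\n"
        + _chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
        + _chunk(b"tEXt", b"parameters\x00photo of a cat\nNegative prompt: blurry\nSteps: 20")
        + _chunk(b"IEND", b""))

print(read_png_text_chunks(demo)["parameters"].splitlines()[0])  # → photo of a cat
```

The same idea is how the grid/image import tools index your renders: the whole prompt, nrompt, seed and sampler line ride along inside every *.png you save.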
Guides and Tutorials:
- How to install and use LORA (i.e. zero in on cameltoe, poking nipples, etc.) by Mr-Fox: link
- Glossary by Jimwalrus: link
- Text control by Mr-Fox: link
- Post-processing by Mr-Fox: link
- Upscaling by Mr-Fox: link
- Illegal Dark Magic by Mr-Fox: link
- Training LORA by Schlongborn: link. Troubleshooting: post. Note: 6GB 1660 cards train LORA in 10 hours or longer.
- Training LORA - a much longer, much more detailed post by Mr-Fox: link
- Training LORA - a good setting for Kohya to start with by Mr-Fox: link
- Pipeline-based workflow and its power by Sepheyer: link
- Comfy UI Face-fix workaround by Sepheyer: link
- Comfy UI Hi-Res fix workflow by Sepheyer: link
- Widescreen Renders by Dagg0th: link
- SDXL by Sepheyer: link
Models
How to use models: link
A quick overview of how models compare using the same seed, prompt and settings: link and link
Embeddings
How to use embeddings: link
A couple of things to add if you install SD on a Linux distribution, from my experience. I run SD on a recent Linux Mint distribution.
Sweet! I added your post to the original post's guide section.
- You want to make sure you're using the proprietary graphics drivers (on Linux there's usually an open-source alternative by default). In Mint, this is simple to do through the Driver Manager; I am not sure how you'd switch in other Linux distributions, but look for it.
- To prevent freezes from memory leaks (this was pretty bad when I started out, it froze the entire system), install tcmalloc:
1. sudo apt install libgoogle-perftools-dev (the -dev suffix means you might need source code repositories enabled if you don't have them turned on)
2. export LD_PRELOAD=libtcmalloc.so in the SD folder on your system
source: link
- Run SD with the --medvram flag. Not a big hit to performance, and it prevents VRAM crashes from higher resolutions or more realistic checkpoints.
- Likewise, run with the --xformers flag. You will see the xformers error stop showing up. This speeds up the whole process and further helps with memory crashes (for me, I was finally able to get over 600x800 resolution on realistic models). On more recent Linux distributions, xformers is built in; we just need to tell SD to actually use it.
Obviously the above worked for me, on my distribution, on my desktop with my hardware. The above are generic enough that I believe it's just general good practice for anyone running on Linux, but let me know if you think otherwise.
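Putting those Linux tips in one place: on Linux the counterpart of webui-user.bat is webui-user.sh, and the exports can live there so they apply on every launch. A minimal sketch, assuming your distro installs tcmalloc under the plain libtcmalloc.so name (on some systems the file is versioned, e.g. libtcmalloc.so.4, so adjust the path):

```sh
#!/bin/bash
# webui-user.sh - sketch combining the Linux tips above; adjust to your system

# preload tcmalloc to avoid the memory-leak freezes
export LD_PRELOAD=libtcmalloc.so

# --medvram trades a little speed for fewer VRAM crashes;
# --xformers enables memory-efficient attention
export COMMANDLINE_ARGS="--medvram --xformers"
```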
I know it's not Stable Diffusion, but has anyone messed around with Krea AI? The realtime-ish generation is cool.
I'm not sure whether the restore faces option is there any longer. A very good tool for faces is ADetailer in Automatic1111. Just look for adetailer in the 'extensions' tab to install it. There are loads of videos about it - this one covers the install, for instance: link.
For Face Restore, I'd recommend using GFPGAN over CodeFormer - far better, especially for photorealistic images. In A1111, select Face Restoration on the Settings tab. I like to use it at ~0.05 visibility. link
I think I've found the face restore; long story: I tried the XYZ plot and saw "Face Restore", so I chose it and input some random value to see if it worked. Checking the cmd window, I saw SD was downloading from this link: link. Not sure if it can be used in Automatic1111 though.
And if I may add my $0.02 - do switch to ComfyUI rather than A1111. I know at first it will be butthurt for weeks on end, but it will pay off.
If I wanted butthurt for weeks on end, I'd forget my wife's birthday again...
"There are extensions that let you build entire prompt libraries with image previews."

Can you expand on that a bit? Which extensions that you know of help with building images from img2img? I use Pose Net a lot (with different models) and it helps for sure, but I'm always happy to learn of more. Also techniques. Thanks!
In that quote I meant to say that you can save images with metadata into a local library that is indexed and searchable. It's just a much better user experience than the built-in A1111 prompt library (which is extremely old - that prompt tool was in the very first version of A1111 and predates all the amazing stuff we have today).
You can find Restore Faces by going to Settings.

This is my current UI, a couple of questions:
where is the "Restore face" setting?
while I have indicated it in both positive + negative prompts, it still generated a nude picture, & sometimes other people appear; how do I fix that? Thanks in advance.
"Actually, share those too then, please. Would help."

There are many.