• To improve security, we will soon begin forcing a password reset at next login for any account that uses a weak password. If you have a weak password or a defunct email address, please update it now to avoid future disruption.

[Stable Diffusion] Prompt Sharing and Learning Thread

me3

Member
Dec 31, 2016
316
708
Using an IPAdapter setup similar to the one I posted here, and using the split-head image used to create this clip, you can create some interesting effects.
Depending on which "layer" you put the image in, the weights, etc., you can create "twins", where one reflects more of the inner split image and the other takes the outer skin. Or you can create more of a "two-face", where the face is half of each.

Some poor examples from me just testing. I've noticed I have a horrible lack of background image options, so I probably need to spend some time creating images more tailored for each "layer".

_ComfyUI_temp_okygp_00001_.jpg _ComfyUI_temp_okygp_00022_.jpg

More images in the thumbnails to not make the post a scrolling nightmare.
_ComfyUI_temp_okygp_00001_.jpg _ComfyUI_temp_okygp_00002_.jpg _ComfyUI_temp_okygp_00022_.jpg _ComfyUI_temp_okygp_00035_.jpg _ComfyUI_temp_okygp_00041_.jpg
 
Last edited:
  • Like
Reactions: VanMortis

DreamingAway

Member
Aug 24, 2022
222
591
Use this button here to save prompts/styles

View attachment 3172218

It's way easier to just save your images with metadata included, then use them as prompt lookups. There are extensions that let you build entire prompt libraries with image previews.

Generated images make for a much better prompt library than that dropdown, IMO.

--

If you want to quickly swap between saving metadata and purging it, you can add "Save text information about generation parameters as chunks to png files" to your main page and click it off and on between generations to quickly toggle metadata.

(Its element name is "enable_pnginfo".)

--

In case it's not obvious: you can copy any image into the PNG Info tab and then hit "Send to txt2img" to quickly load the identical prompt and settings from an image.
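Since A1111 stores those generation parameters in an ordinary PNG tEXt chunk (under the keyword "parameters" when enable_pnginfo is on), you can pull them out for a prompt library with a few lines of stdlib Python. A minimal sketch, no image library assumed:

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def read_text_chunks(data: bytes) -> dict:
    """Return {keyword: text} for every tEXt chunk in raw PNG bytes.

    A1111 stores the prompt/settings under the 'parameters' keyword
    when 'enable_pnginfo' is on.
    """
    if data[:8] != PNG_SIG:
        raise ValueError("not a PNG file")
    chunks, pos = {}, 8
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, val = body.partition(b"\x00")
            chunks[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + body + 4 CRC
        if ctype == b"IEND":
            break
    return chunks
```

So `read_text_chunks(open("image.png", "rb").read()).get("parameters")` gives you the same text the PNG Info tab shows.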
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,519
3,581
The Rodent bro dropped a video showcasing a denoiser node for ComfyUI:

I tried it - the thing is rather inconsistent, but when it works it fucking rocks:
workflow (3).png
 

me3

Member
Dec 31, 2016
316
708
The Rodent bro dropped a video showcasing a denoiser node for ComfyUI:

I tried it - the thing is rather inconsistent, but when it works it fucking rocks:
View attachment 3184444
I tried "unsampling" to try to get consistency when making animations; I think I briefly mentioned planning to try it in an earlier post. But as you said, it can be a bit hit and miss, especially if you're doing hundreds of images and can't really tweak things for each one. It's worth looking into for people though; it definitely has its uses.
There's a controlnet-lllite model (Kohya's "controlnet" version) that might be worth looking at if you're using images as "input" too; it's a bit misleadingly called blur. I've seen some very good results from it, even with multiple passes on extremely "blurred" images.

Has anyone tried hypertile?
So far in my limited testing it seems to fare rather badly at what you'd normally think of as tiling. With a 512x512 tile it used more VRAM per tile than it took to make the 1024x1024 image...
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,519
3,581
I tried "unsampling" to try to get consistency when making animations; I think I briefly mentioned planning to try it in an earlier post. But as you said, it can be a bit hit and miss, especially if you're doing hundreds of images and can't really tweak things for each one. It's worth looking into for people though; it definitely has its uses.
There's a controlnet-lllite model (Kohya's "controlnet" version) that might be worth looking at if you're using images as "input" too; it's a bit misleadingly called blur. I've seen some very good results from it, even with multiple passes on extremely "blurred" images.

Has anyone tried hypertile?
So far in my limited testing it seems to fare rather badly at what you'd normally think of as tiling. With a 512x512 tile it used more VRAM per tile than it took to make the 1024x1024 image...
Can you post the hypertile workflow? There are a few different things that show up as hypertile for me, so yea.
 

me3

Member
Dec 31, 2016
316
708
Can you post the hypertile workflow? There are a few different things that show up as hypertile for me, so yea.
It's not much of a workflow thing really; it's just a simple node you put "between" model connections, like with LoRAs. I think it's a base node.

It should show up if you just start typing the name in the node search.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,519
3,581
q_q

IDK.

If you come across a workflow, please ping me. I really have no idea how the devs mean for the HT node to be used. It's like giving me a carrot for the snowman but forgetting to tell me that it must be its nose - left to my own devices I'd stick the carrot elsewhere.

Here is what I get from merely plugging it in - there is a defect where black stripes appear on the face. So, yea, I wish there were a manual for how and for what this is intended to be used.
 
Last edited:

felldude

Member
Aug 26, 2017
467
1,429
So I tested the Adam 32-bit training; I mean, I have those cuda .dll's installed, so I may as well try them out.
(Using libbitsandbytes_cuda118.dll)
Cosine, with the exact same learning rate as the SD 1.5 model: 4800 steps over 4 epochs.

Which do you think is BF16 and which is FP32? (Same seed and LoRA value.)

Learning Concept is Tanlines


ComfyUI_00829_.jpg
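For context on why BF16 and FP32 renders are so hard to tell apart: bfloat16 keeps fp32's sign bit and full 8-bit exponent and only drops mantissa bits, so values keep their dynamic range and just lose fine precision. A toy sketch of the truncation (illustrative only, not how the trainer actually converts tensors):

```python
import struct

def to_bf16_bits(x: float) -> int:
    """Truncate an fp32 value to its bfloat16 bit pattern (the top 16 bits)."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    return bits >> 16

def bf16_to_float(b: int) -> float:
    """Expand a bfloat16 bit pattern back to a Python float."""
    (val,) = struct.unpack(">f", struct.pack(">I", b << 16))
    return val
```

For example, `bf16_to_float(to_bf16_bits(3.14159))` gives 3.140625 - close enough that one value is rarely visibly different from the other in the final image.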
 

namhoang909

Newbie
Apr 22, 2017
87
47
What arguments are you using for Stable Diffusion (Automatic1111)?
I saw someone recommend "--xformers --precision full --no-half --no-half-vae"; I have only used --xformers.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,791
What arguments are you using for Stable Diffusion (Automatic1111)?
I saw someone recommend "--xformers --precision full --no-half --no-half-vae"; I have only used --xformers.
I would recommend only using what you personally need, not what someone else uses. Only use the arguments that your specific setup and your usage of Stable Diffusion require: --xformers, and potentially --api if you will use the Photoshop plugin. Only add arguments when you are trying to resolve an issue, and remove them again once the devs fix it in an update.
 

SDAI-futa

Newbie
Dec 16, 2023
28
30
View attachment 2481998

Stable Diffusion is an AI tool that lets one generate images using either text prompts or other images. It uses datasets to generate new and original visual content; Stable Diffusion users call these datasets models. Using the models requires skill and knowledge; there is an analogy with professional photography - mere button-pressing will lead to unimpressive results. Thus, similarly to photography, one needs to learn to interact with the tool that interacts with the environment. Arguably, art is then created.

Stable Diffusion's actual image generation takes a chunk of time; we can all save each other time, help each other, and speed up the learning process if we share prompts and the images they generated. Thus, this thread is primarily a prompt/nrompt (negative prompt) sharing thread, with images and grids being secondary to prompts.

Please share your prompts/nrompts, images and grids. Do not share just the images, as there are better threads for that.

Rules:
  1. Stable Diffusion only. If you would like to use other tools, please create new threads for the respective tools.
  2. Images without prompts are an absolute taboo and will be reported as breaking the rules. This is not an AI-art sharing thread. The AI-art sharing thread is here: link. A qualifying post must have:
    1. model name with a link to get it.
    2. prompt/nrompt. If the nrompt (negative prompt) is empty, expressly state so. Please use spoilers for the prompts.
    3. a grid of images attached, ideally 6x5, so readers can get a feel for what kind of images the prompt will generate.
    4. the actual image or images. Please use spoilers for all but one image. Images should be resized to fit the page; any image requiring scrolling might end up being reported. Please avoid situations where viewers need to scroll across one image.
    5. Points 2.1, 2.2, 2.3 and 2.4 can be ignored if your image contains metadata for the prompt, seed, model name, etc. These are mostly *.png files. A jpeg file doesn't contain prompt metadata and is hence an automatic violation of the requirements - subject to administrative action.
  3. Tutorials, ideas, suggestions are great. Discussion is OK.
  4. Trolling or garbage quality will be removed. Nudity is great, weird and freaky content is prooobably not. Cringe and gross content are definitely not.
  5. If sharing someone else's work, always credit the creator if known; otherwise: "Creator: unclaimed". The only exception to the rule: the image has the creator's watermark.
  6. If you are the creator of an unclaimed image, you have to post a link with your original work in order to claim the credit.
  7. This is not an art thread; it is for learning. For sharing your AI art, go here
Guides and Tutorials:

Installing Stable Diffusion
Stable Diffusion is, to simplify, a "backend". The front ends to Stable Diffusion are:

WebUi by AUTOMATIC1111
Install this - this is what 100% of folks need and want.​
The release page is here:
How to install (if you have never used "git clone") is here:
ComfyUI by comfyanonymous (hat tip to Elefy for calling this one out)​
Brand new and super-promising. Pipeline-based workflow. It doesn't have all the features that WebUI has. Install it only after you've spent a few months in WebUI.​
The release page is here:

Good practices - if you are just starting!
  • Enable the setting where prompts are attached to your PNG files automatically. This is in Settings: "Save text information about generation parameters as chunks to png files = on". Then you can open any of your PNG files with a text editor and find the settings that got you that image. Enable this right away.
  • Add "git pull" at the very beginning of your "webui-user.bat". If you disagree and think this is bad practice because of version control, then this advice is not for you; you are fine.
  • Leverage the automatic image import tool: link
  • Save your grids to the image folders instead of a separate folder. It is much easier to organize things this way. If you disagree and have a different workflow, then this advice is not for you; you are fine.
  • Always have "Restore faces = on". Here is why: link
  • Learn to use "Script -> XYZ plot" as early as possible. This will save you a massive amount of time early on. Here is what it means and how to: link
  • Check out the optimization flags that you can run Stable Diffusion with: . You might want to have [--xformers] on, and a few others.
  • Check out the prompt book: link
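The "git pull" tip above looks like this in practice; a sketch of a webui-user.bat, where the --xformers flag is just an example and you should set whatever arguments you actually need:

```bat
@echo off
rem Update the webui to the latest version before every launch
git pull
set COMMANDLINE_ARGS=--xformers
call webui.bat
```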
Guides
  • How to install and use LORA (i.e. zero in on cameltoe, poking nipples, etc.) by Mr-Fox: link
  • Glossary by Jimwalrus: link
  • Text control by Mr-Fox: link
  • Post-processing by Mr-Fox: link
  • Upscaling by Mr-Fox: link
  • Illegal Dark Magic by Mr-Fox: link
  • Training LORA by Schlongborn: link
    Troubleshooting: post. Note: 6GB 1660 cards train LORA in 10 hours or longer.
  • Training LORA - a much longer, much more detailed post by Mr-Fox
  • Training LORA - a good setting for Kohya to start with by Mr-Fox: link
  • Pipeline-based workflow and its power by Sepheyer: link
  • Comfy UI Face-fix workaround by Sepheyer: link
  • Comfy UI Hi-Res fix workflow by Sepheyer: link
  • Widescreen Renders by Dagg0th: link
  • SDXL by Sepheyer: link

Models
How to use models:
A quick overview how models compare using the same seed, prompt and settings: link and link






Embeddings
How to use embeddings:
link

I just posted a few simple tips for running SD on Linux. Thanks for putting up this thread; I'm going to follow it and learn more, and share what I know.
 

SDAI-futa

Newbie
Dec 16, 2023
28
30
Installing Stable Diffusion
Stable Diffusion is, to simplify, a "backend". The front ends to Stable Diffusion are:
To include a couple of the things I mentioned: here is what to do if you install SD on a Linux distribution, from my experience. I run SD on a recent Linux Mint distribution.

You want to make sure you're using the proprietary graphics drivers (on Linux there's usually an open-source alternative by default). In Mint this is simple to do through the Driver Manager; I am not sure how you'd switch in other Linux distributions, but look for it.

  1. To prevent freezes from memory leaks (this was pretty bad when I started out; it froze the entire system), install tcmalloc:
    1. sudo apt install libgoogle-perftools-dev (the -dev indicates you might need source-code repositories if you don't have those turned on)
    2. export LD_PRELOAD=libtcmalloc.so in the SD folder on your system

      source:
  2. Run SD with the --medvram flag. It is not a big hit to performance, and it prevents VRAM crashes from higher res or more realistic checkpoints.
  3. Likewise, run with the --xformers flag. You will see the xformers error stop showing up. This speeds up the whole process and further helps with memory crashes (for me, I was finally able to get over 600x800 res on realistic models). On more recent Linux distributions, xformers are built in; we just need to tell SD to actually use them.

Obviously the above worked for me, on my distribution, on my desktop, with my hardware. But it is generic enough that I believe it's just general good practice for anyone running on Linux; let me know if you think otherwise.
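Putting the steps above together, the launch sequence looks roughly like this. The webui folder path is an assumption; adjust it to wherever you cloned the repo:

```shell
# One-time: install tcmalloc to avoid the memory-leak freezes
sudo apt install libgoogle-perftools-dev

# Every launch, from the SD folder (path is an example)
cd ~/stable-diffusion-webui
export LD_PRELOAD=libtcmalloc.so
./webui.sh --medvram --xformers
```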
 
Last edited:

Sepheyer

Well-Known Member
Dec 21, 2020
1,519
3,581
To include a couple of the things I mentioned: here is what to do if you install SD on a Linux distribution, from my experience. I run SD on a recent Linux Mint distribution.

You want to make sure you're using the proprietary graphics drivers (on Linux there's usually an open-source alternative by default). In Mint this is simple to do through the Driver Manager; I am not sure how you'd switch in other Linux distributions, but look for it.

  1. To prevent freezes from memory leaks (this was pretty bad when I started out; it froze the entire system), install tcmalloc:
    1. sudo apt install libgoogle-perftools-dev (the -dev indicates you might need source-code repositories if you don't have those turned on)
    2. export LD_PRELOAD=libtcmalloc.so in the SD folder on your system

      source:
  2. Run SD with the --medvram flag. It is not a big hit to performance, and it prevents VRAM crashes from higher res or more realistic checkpoints.
  3. Likewise, run with the --xformers flag. You will see the xformers error stop showing up. This speeds up the whole process and further helps with memory crashes (for me, I was finally able to get over 600x800 res on realistic models). On more recent Linux distributions, xformers are built in; we just need to tell SD to actually use them.

Obviously the above worked for me, on my distribution, on my desktop, with my hardware. But it is generic enough that I believe it's just general good practice for anyone running on Linux; let me know if you think otherwise.
Sweet! I added your post to the original post's guide section.
 

felldude

Member
Aug 26, 2017
467
1,429
I know it's not Stable Diffusion, but has anyone messed around with Krea AI? The realtime-ish generation is cool.
That might be the training.
They claim to have trained from scratch (though still likely off the AI-organized set), or maybe they even started from something older, like ResNet-50.

If you want a fun project, you can do an image-to-image search for the closest original image in the 600TB image set that SD 1.5 was trained on here


Guess how many times these will come up as a source of training data for 1girl from x country

Also, I managed to find my watermark in the set (from over 10 years ago... ehh, 15 years).
Validation from a 60-million-image set... lol

Felldude.jpg
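Searches like that typically embed every image (with CLIP or similar) and rank by cosine similarity; a toy sketch of just the ranking step, with made-up 2-D vectors standing in for real embeddings:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def nearest(query, corpus):
    """Index of the corpus embedding most similar to the query."""
    return max(range(len(corpus)), key=lambda i: cosine(query, corpus[i]))
```

With real CLIP embeddings, the corpus would be the dataset's precomputed vectors and the query would be your uploaded image's embedding.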
 
Last edited:
  • Like
Reactions: devilkkw and Mr-Fox

namhoang909

Newbie
Apr 22, 2017
87
47
This is my current UI. A couple of questions:
Where is the "Restore faces" setting?
While I have indicated it in both the positive and negative prompts, it still generated a nude picture, and sometimes other people appear. How do I fix that? Thanks in advance. 1703044857393.png
 
  • Like
Reactions: Sepheyer