[Stable Diffusion] Prompt Sharing and Learning Thread

sharlotte

Member
Jan 10, 2019
303
1,606
You could try Latent Couple and use a template to place your character towards the bottom of your picture, leaving the headroom you want. There are loads of videos, but as usual this guy makes it 'easy':
 

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,145
1,957
You could try Latent Couple and use a template to place your character towards the bottom of your picture, leaving the headroom you want. There are loads of videos, but as usual this guy makes it 'easy':
Yeah, I know about this method; I was hoping there was a simple prompting way of doing it though. If it's really just about some space above the head, without too much complex stuff going on: I just tried Photoshop's generative expansion, and it did the job fantastically.
 

picobyte

Active Member
Oct 20, 2017
639
711
When I create a picture with a character in the center, but I want some headroom above the character that it doesn't occupy (because I want space for a title/text), how can I tell SD to leave that room free specifically? I have a great seed, but the character almost fills the entire top.
hkennereth mentioned some, but there is another option:
In ComfyUI there is a MultiAreaConditioning node; the webui should have something similar. Use that and ConditioningSetArea with a specific prompt per region. It is tough to get right for placing a complex scene, but for leaving an area open it may work.
You can also do something similar via drawn masks.
This image contains a ComfyUI workflow that uses this. The setup for this particular image was actually way too complex; it rarely worked.
(You can ignore most of the KSampler steps at the top, they are not in use; I also didn't fully understand the advanced KSampler at that time.)
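To reason about the region coordinates, a rough sketch in Python (the helper names here are hypothetical, not ComfyUI's actual API): splitting the canvas into a "headroom" band for the title and a lower region for the character, with coordinates snapped to the 8-pixel grid that SD's latent space uses.

```python
def snap8(v: int) -> int:
    """Round down to a multiple of 8 (one latent cell = 8 pixels)."""
    return (v // 8) * 8

def split_headroom(width: int, height: int, headroom_frac: float = 0.25):
    """Return (top, bottom) regions as (x, y, w, h) tuples in pixels."""
    cut = snap8(int(height * headroom_frac))
    top = (0, 0, snap8(width), cut)                       # leave free for a title
    bottom = (0, cut, snap8(width), snap8(height) - cut)  # character goes here
    return top, bottom

top, bottom = split_headroom(512, 768, 0.25)
print(top, bottom)  # (0, 0, 512, 192) (0, 192, 512, 576)
```

Each tuple can then be fed to the area-conditioning node for its own prompt (e.g. "sky, empty space" on top, the character prompt on the bottom).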
 

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,145
1,957
hkennereth mentioned some, but there is another option:
In ComfyUI there is a MultiAreaConditioning node; the webui should have something similar. Use that and ConditioningSetArea with a specific prompt per region. It is tough to get right for placing a complex scene, but for leaving an area open it may work.
You can also do something similar via drawn masks.
Right now I just use Photoshop's Generative Expand; it works perfectly for easy things like the sky. In the future I will try to learn how to make proper sketches that SD recognizes, so I can use the latent image as input for the KSampler. I think these sketches can be made in 1-2 minutes, and they give maximum control over the scenery, character pose, placement and so on.
 

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,145
1,957
Here's a cool tip: if you have Photoshop and just want to remove something you don't like in your picture (e.g. weird artifacts on clothing), you can draw a shape around that part, use Generative Fill, and input nothing. Just hit Enter. Photoshop will usually then try to remove it.

That way I could easily get rid of some weird buttons and stuff that SD put onto my character's jacket.
 

sharlotte

Member
Jan 10, 2019
303
1,606
For people with less VRAM, this Forge (very similar to SD webui) could be a good way to go:




I started testing this morning; with my RTX 3060 it seems faster, and I haven't yet had any issues with running out of memory. From what I can see in the cmd window while generating, it frees memory between steps. There's no need to set lowvram or other flags in the user bat file, as it automatically detects the graphics card used. Common modules come pre-installed (like ControlNet, Kohya HRFix...). Super easy install.

6 minutes to generate the two 2048x2048 images below with these settings:
 

devilkkw

Member
Mar 17, 2021
324
1,094
Seems good, but I'll wait for the official release before testing.

I see many posts on Reddit about Stable Cascade. Has anyone tested it? What are your impressions?
 

PandaRepublic

Member
May 18, 2018
213
2,147
Is there a new tutorial for making LoRAs? It seems like my old settings don't work anymore. If so, how many repeats do you use now? I used to use 100; now it just turns out horribly. Also, with some of the LoRAs I create I get a NaN error. Either that or something is wrong with Kohya on my PC.
 

devilkkw

Member
Mar 17, 2021
324
1,094
I didn't try the new one. And what is Cascade?
look
Is there a new tutorial for making LoRAs? It seems like my old settings don't work anymore. If so, how many repeats do you use now? I used to use 100; now it just turns out horribly. Also, with some of the LoRAs I create I get a NaN error. Either that or something is wrong with Kohya on my PC.
Are you training a style or a person? You need to share more details; Kohya has many settings, and it's hard to tell what went wrong in your training with so little information. And what model are you training on, XL or 1.5?
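One thing worth checking with the "100 repeats" problem is the total step count, since repeats, epochs and batch size multiply together. A rough sketch of the commonly cited formula (exact step counting may differ slightly between Kohya versions):

```python
def total_steps(num_images: int, repeats: int, epochs: int, batch_size: int) -> int:
    """Approximate optimizer steps for a Kohya-style training run."""
    return (num_images * repeats * epochs) // batch_size

# 20 images at 100 repeats is already 2000 steps per epoch at batch size 1,
# which can easily overtrain compared to fewer repeats over more epochs:
print(total_steps(20, 100, 1, 1))  # 2000
print(total_steps(20, 10, 10, 2))  # 1000
```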
 

modine2021

Member
May 20, 2021
417
1,389
look

Are you training a style or a person? You need to share more details; Kohya has many settings, and it's hard to tell what went wrong in your training with so little information. And what model are you training on, XL or 1.5?
Hmmm... it uses a notebook. I don't use that.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Hmmm... it uses a notebook. I don't use that.
Kohya ss is a web UI specifically for training different types of models. It has DreamBooth, LoRA, TI etc., plus some useful tools for the preparation process, such as making the captions. I assume that you are doing the training in A1111.
Kohya ss is far superior, as far as I remember. I have only trained one LoRA that I consider a success, though I have done many practice and experimentation runs.
For just trying out training a LoRA, I suppose A1111 is fine, but if you are serious about it I think Kohya ss is the better option.
Aitrepreneur's Kohya ss LoRA tutorial:

The basic settings in this video are only to get you started; you need to figure out the best settings for your own scenario yourself.

This rentry guide is very useful:

Which ckpt model you choose to train your LoRA on is very important; it's best to use a model that is responsive and consistent.
Don't use an ancestral sampler, you want consistency. Choose one of the well-established classics: Euler, DPM++ 2M Karras, DPM++ SDE Karras etc.
The next thing is which optimizer you use; AdamW8bit or AdamW is good as a start.
Next is the learning rate; don't set it too high, as that tends to produce an overtrained LoRA.
Then the "Network dim (Rank)" settings: 128 for both is a good base, but you can try lowering it slightly depending on what type of LoRA you are training, such as style or character.
Learn about the concept of dampening. It refers to settings that have the secondary effect of slowing down the learning rate.
There is a section about it in the rentry guide.
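To get a feel for what network dim (rank) controls, a back-of-envelope sketch: in LoRA, each adapted weight matrix W (d_out x d_in) gets two low-rank factors B (d_out x r) and A (r x d_in), so the added parameters per layer scale linearly with the rank. The layer size below is just an illustrative example.

```python
def lora_params_per_layer(d_in: int, d_out: int, rank: int) -> int:
    """Added parameters for one LoRA-adapted layer: B is d_out*r, A is r*d_in."""
    return rank * (d_in + d_out)

# For a 768x768 projection (typical of SD 1.5 attention blocks):
print(lora_params_per_layer(768, 768, 128))  # 196608
print(lora_params_per_layer(768, 768, 32))   # 49152
```

This is one reason lowering the rank shrinks the LoRA file and can reduce overfitting on small datasets.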


Something to try is a bit of noise offset; it makes the image sharper and more colorful, but if you overdo it the image can look "burnt". I used a very low setting (0.1) for my LoRA with good results, but it's not strictly necessary. Consider using clip skip 2, as it might give better results.
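For the curious, the noise-offset idea as commonly implemented in trainers like Kohya can be sketched like this (shapes and names here are illustrative, not Kohya's actual code): the base Gaussian noise gets a small per-image, per-channel constant added, which helps the model learn overall brighter or darker images.

```python
import numpy as np

def noise_with_offset(shape, offset: float, rng: np.random.Generator):
    """Gaussian noise plus a small constant shift per image and channel."""
    b, c, h, w = shape
    base = rng.standard_normal(shape)
    # one scalar per (image, channel), broadcast over the spatial dims
    shift = offset * rng.standard_normal((b, c, 1, 1))
    return base + shift

rng = np.random.default_rng(0)
noise = noise_with_offset((2, 4, 64, 64), 0.1, rng)
print(noise.shape)  # (2, 4, 64, 64)
```

With offset 0.0 this reduces to plain Gaussian noise, which is why small values like 0.1 are a gentle tweak rather than a drastic change.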

There are some general recommendations in the section "Starting settings and optimal settings".



Something to keep in mind is that if you start with bad images you will not get a good result, so be very selective when choosing images for the dataset. I think it's best not to use too many; 20-30ish is a good number.
The captions are very important as well. It's fine to use the auto-caption tool in Kohya as a starting point, but it's well worth the time to go through them manually and adjust them. If your GPU can handle it, go with 768 instead of 512 resolution.
It's not necessary to use 1:1 images; they can be either portrait or landscape. Just make sure to upscale and crop manually with Photoshop or similar so you don't have a bunch of variations. You can have some variation, of course, just not too much. Make sure to enable bucketing ("Enable buckets" in Kohya); it will take care of the variations for you.
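The bucketing idea can be sketched in a few lines (the bucket list below is illustrative, not Kohya's exact set): each image is assigned to the bucket resolution whose aspect ratio is closest, so mixed portrait/landscape datasets train without heavy cropping.

```python
# Hypothetical bucket resolutions around a 512x512 base
BUCKETS = [(512, 512), (448, 576), (576, 448), (384, 640), (640, 384)]

def nearest_bucket(width: int, height: int):
    """Pick the bucket whose aspect ratio best matches the image."""
    target = width / height
    return min(BUCKETS, key=lambda wh: abs(wh[0] / wh[1] - target))

print(nearest_bucket(1024, 1024))  # (512, 512)
print(nearest_bucket(600, 900))    # (384, 640)
```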

Good luck.
 

Sharinel

Active Member
Dec 23, 2018
598
2,511
It's not necessary to use 1:1 images; they can be either portrait or landscape. Just make sure to upscale and crop manually with Photoshop or similar so you don't have a bunch of variations. You can have some variation, of course, just not too much. Make sure to enable bucketing ("Enable buckets" in Kohya); it will take care of the variations for you.

Good luck.
On this very last point, I like to use to crop any pics I have to the correct resolution for training. Most of my LoRAs these days are on SDXL, so I train at 1024x1024.

Apart from that, I'm going to use everything that El Foxy has said here as I didn't do half of that :)