[Stable Diffusion] Prompt Sharing and Learning Thread

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,794
Gents, am I reading this correctly: is there a new GUI for Stable Diffusion? Has anyone tried it? I'm a bit lazy and lacking the motivation to dig into it myself.



View attachment 2631075
I think it's a different piece of software, separate from Stable Diffusion, just like DALL-E and Midjourney. After a quick search I read that it's from people at Stability AI, the studio behind Stable Diffusion: "an open source version of text-to-image AI without any content filter". Notice that there isn't even a whisper about Stable Diffusion on that GitHub page, nor any checkpoints etc. In short, it's a different model from Stable Diffusion.
Stable Diffusion 1.5 is the model we're using here, and the different checkpoints are still the same model, namely SD 1.5, but trained for specific concepts, styles, characters etc. Put more clearly, a checkpoint is the training state of Stable Diffusion 1.5 after a certain amount of specific training; checkpoint as in partway point, control point, or milestone. To answer the main question: I figure what you have found is a GUI from Stability AI, not Stable Diffusion itself. They say it's their version of DreamStudio.
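To make the "checkpoint = saved training state" idea concrete, here's a toy sketch in plain Python (no SD code involved; the "model" is just a dict I made up for illustration): a checkpoint is simply the weights serialized at some point during training, which you can later load back and use.

```python
import json
import os
import tempfile

# Toy "model": just a dict of named weights.
model = {"w1": 0.5, "w2": -1.2}

def train_step(m):
    # Pretend training nudges the weights a little.
    return {k: v + 0.1 for k, v in m.items()}

def save_checkpoint(m, path):
    # A checkpoint is nothing more than the weights written to disk.
    with open(path, "w") as f:
        json.dump(m, f)

def load_checkpoint(path):
    with open(path) as f:
        return json.load(f)

# Train a bit, then snapshot the state at this "milestone".
for _ in range(3):
    model = train_step(model)

path = os.path.join(tempfile.mkdtemp(), "ckpt.json")
save_checkpoint(model, path)

restored = load_checkpoint(path)
print(restored == model)  # True: same model, frozen at that training state
```

Swap the toy dict for a few gigabytes of SD 1.5 weights and that's essentially what a .ckpt/.safetensors file is.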
 

Dagg0th

Member
Jan 20, 2022
208
2,043
Hi guys

I just wanted to thank everyone for this thread and for introducing me to SD.
I read a lot of tips and suggestions, and from my humble beginnings, learning about Loras, textual inversions, and now with ControlNet, I'm having a blast.

Upscale render
00073-741644114.png

Original with prompts (in spoiler).
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,794
Dagg0th said:
I just wanted to thank everyone for this thread and for introducing me to SD. [...]
Awesome, looking great. :)(y)
 

Sharinel

Active Member
Dec 23, 2018
519
2,151
Step by step of how I use inpainting.

I tried out this prompt yesterday:

slideshow (forward, back, left, right) view of a 25 year old sexy gorgeous woman Anna Morrison, square jaw, high cheekbones, huge breasts, wearing intricate bodystocking, masterpiece, extremely detailed, 8k, subsurface scattering, (strutting down a futuristic catwalk) , jewellery

and got the following image
00125-1551101312.png

The middle image wasn't too bad but the faces to the left and right came from horror movies. So I dragged the image into PNG Info and then clicked the button to send to img2img (which transfers the seed) and from there sent it to inpainting.

I then masked out the part of the image I wanted to change

1684485783061.png

and set up inpainting as such

1684485881241.png
After trying a few times I figured out that denoising had to be set to 0.4-0.5. Any lower and the horror face didn't change; any higher and it tried to fit another tiny woman into the face of the woman on the left. Got it to my satisfaction. (cont)
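For anyone wondering why that 0.4-0.5 band matters: in A1111-style img2img/inpainting, denoising strength roughly determines how far along the noise schedule the sampler starts, so only a fraction of your sampling steps actually rework the image. A rough sketch of that relationship (simplified; the real scheduler is more involved):

```python
def effective_steps(sampling_steps: int, denoising_strength: float) -> int:
    """Approximate number of steps that actually alter the image in img2img."""
    if not 0.0 <= denoising_strength <= 1.0:
        raise ValueError("denoising strength must be in [0, 1]")
    return int(sampling_steps * denoising_strength)

# At 20 steps, strength 0.4 leaves about 8 steps to rework the masked area:
print(effective_steps(20, 0.4))  # 8
print(effective_steps(20, 0.5))  # 10
```

Too low and there aren't enough steps to change the horror face; too high and the sampler has enough freedom to invent a whole new subject in the masked region.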
 

prezs

New Member
Apr 22, 2019
5
4
I was trying to inpaint some clothing (this model really likes nudes, or maybe it's because I put "gigantic breasts" into the prompt; don't judge me, I've only been doing this for 3 days and I wanted to keep the size) and I kept getting nipple cutouts like this
00031-4219112953.png
So I put ((nipples)) into the negative prompt and they actually went inside LMAO
00032-4219112953.png
Any advice on how to make the suit opaque instead of blending with skin?
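On the ((nipples)) syntax: in A1111's prompt grammar each pair of parentheses multiplies a token's attention weight by 1.1, so nesting compounds. A minimal sketch of just that rule (the real parser also handles explicit weights like (word:1.3) and de-emphasis with [word]):

```python
def emphasis_weight(depth: int, base: float = 1.1) -> float:
    """Attention multiplier for a token wrapped in `depth` pairs of parentheses."""
    return base ** depth

print(round(emphasis_weight(1), 3))  # 1.1  -> (word)
print(round(emphasis_weight(2), 3))  # 1.21 -> ((word)), as in ((nipples))
print(round(emphasis_weight(3), 3))  # 1.331
```

So ((nipples)) in the negative prompt pushes the model away from that concept about 21% harder than an unweighted token would.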
 

me3

Member
Dec 31, 2016
316
708
Considering the amount of time I've spent just playing around with SD, it might be worth posting something. Unfortunately I don't think the prompt will do much good as it uses a personal TI, but it still might inspire a bright spark in someone. 00020-1058504346.png

Also, you can make some pretty sizable images even on very low-spec cards.
The forum refused the original upload ("The uploaded file is too large for the server to process"), so I'm having to convert this one, hopefully not affecting things too badly: 00004-424474235.jpg

This is the initial image, before all the outpainting and scaling, prompts should be in the png info (hoping that survives posting)
00001-1953282673.png
 
Jul 27, 2021
233
1,342
Have been messing about with:
Checkpoint:
VAE:
Upscaler:
Extensions:

+

+


Positive prompt:
Beautiful male, __portrait-type__, __artist-anime____hair-color__, __eyecolor__, __clothing-male__, __headwear-male__, (masterpiece, best quality, high quality, highres, ultra-detailed), __forest-type__, __flower__

Beautiful female, __portrait-type__, __artist-anime____hair-color__, __eyecolor__, __clothing-female__, __headwear-female__, (masterpiece, best quality, high quality, highres, ultra-detailed), __forest-type__, __flower__

Negative prompt:
bad_prompt_version2, bad-artist-anime, bad-hands-5, bad-image-v2,

00008-1616394514-Beautiful female, Leaning, Chiho Aoshimapurple, dark brown, colored eyeliner,...png


00009-3903331729-Beautiful female, 3_4 shot, Armin Hansenlight brown, grey, cream eyeliner, sh...png

00027-1599072971-Beautiful male, Lying down, Kobayashi Kiyochikalight blonde, dark brown, suit...png
00030-124219067-ocean, Draco Dwarf Galaxy, Eta Carinae Nebula, Alhena Star, Lunar, (masterpiec...png
 

Jimwalrus

Active Member
Sep 15, 2021
930
3,423
Quoted from the post above:
Have been messing about with: [checkpoint, VAE, upscaler, extensions]
Positive prompt: Beautiful male, __portrait-type__, __artist-anime____hair-color__, __eyecolor__, __clothing-male__, __headwear-male__, (masterpiece, best quality, high quality, highres, ultra-detailed), __forest-type__, __flower__ [...]
I'm not familiar with the use of underscores before and after tokens in prompts - what effect do they have?
 
Jul 27, 2021
233
1,342
I'm not familiar with the use of underscores before and after tokens in prompts - what effect do they have?
You should download this extension:


Afterwards download this and place it in the right location:


It's best to download this as well, because it shows options when you type: __



Examples:
firefox_nHIqSLjp9N.png firefox_B9SFeP7WHw.png


It lets you use wildcards, meaning that, for example, if you want to generate something but are unsure what clothing to use for a female character, it will randomly pick one from a huge list of options.

Examples:
1684584788474.png
1684584813606.png
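The wildcard mechanic is easy to sketch: every __name__ token gets swapped for a random entry from a matching list. A toy version in Python (the lists here are hypothetical examples; the actual extension loads them from .txt files in its wildcards folder):

```python
import random
import re

# Hypothetical wildcard lists; the extension reads these from text files.
WILDCARDS = {
    "hair-color": ["blonde", "black", "auburn"],
    "clothing-female": ["sundress", "leather jacket", "kimono"],
}

def expand(prompt: str, rng: random.Random) -> str:
    """Replace each __name__ token with a random pick from its wildcard list.

    Unknown wildcard names are left untouched.
    """
    def pick(match):
        name = match.group(1)
        return rng.choice(WILDCARDS.get(name, [match.group(0)]))
    return re.sub(r"__([\w-]+)__", pick, prompt)

rng = random.Random(42)
print(expand("Beautiful female, __hair-color__ hair, __clothing-female__", rng))
```

One detail worth noting: because `\w` matches underscores, two wildcards written back to back with no separator (like __artist-anime____hair-color__ in the prompts above) can be read as a single token, which is why the saved filenames show the artist name fused to the hair color.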
 

me3

Member
Dec 31, 2016
316
708
Jimwalrus I have done some tests, and from what I see, 30000 steps tend to produce chromatic aberration at mid CFG.
So I decided to try different training runs, and I ended up getting my best result at 3000 steps.
I'll share my settings in case you want to try them.

15-20 good-quality images, close-up and full body (I use 768x768)

Create embedding:
Number of vectors per token: number of images / 2.2 (rounded up)

Train setting:
Gradient Clipping :norm
Batch size : 1 or 2 (depends on vram)
Gradient accumulation steps : 1
Embedding Learning rate : 0.005:100, 3.09:500, 1.8:700, 2.06:900, 3.269:1000, 1.05:1500, 0.06:2200, 0.9
Max steps : 3000

With these settings I get good results in about 30 mins, and a usable TI from low to high CFG.

I made different TIs for testing with the same settings on different models; from what I see, training on the f222 model gives better results for realistic images.
Those learning rates seem insane. General advice seems to be to use a pretty low rate to (presumably) hit the vector weights you want and to maintain flexibility. LRs like those might work very well for things you want to keep very "fixed", i.e. a style you'd never want to change, or a very distinct subject, like just the face of a person with a fixed expression/features.

If you used some of those LRs for just a single epoch you'd have your subject learned, so I'd imagine that using them for 3000 steps on 15-20 images beats that data in really hard. You'd probably get a good likeness to your dataset, but you'd need very high weight modifiers in your prompts to change anything, if you could at all.

(sorry about dragging up a 1.5-month-old post, but reading through all the pages takes a while, even just skimming some of it)
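As an aside, the comma-separated learning-rate string in the settings above uses A1111's scheduled-rate syntax: each "rate:step" pair applies that rate up to the given step, and a final bare number runs to the end of training. A rough sketch of how such a schedule is interpreted (simplified, and the function names are mine):

```python
def parse_lr_schedule(schedule: str):
    """Parse an A1111-style 'rate:step, rate:step, ..., rate' schedule string."""
    entries = []
    for part in schedule.split(","):
        part = part.strip()
        if ":" in part:
            rate, step = part.split(":")
            entries.append((float(rate), int(step)))
        else:
            entries.append((float(part), None))  # final rate, open-ended
    return entries

def lr_at(entries, step: int) -> float:
    """Learning rate in effect at a given training step."""
    for rate, until in entries:
        if until is None or step <= until:
            return rate
    return entries[-1][0]

sched = parse_lr_schedule("0.005:100, 3.09:500, 1.8:700, 0.9")
print(lr_at(sched, 50))    # 0.005
print(lr_at(sched, 300))   # 3.09
print(lr_at(sched, 2000))  # 0.9
```

Reading the schedule this way makes the concern above clearer: rates like 3.09 are in effect for hundreds of steps, not just a brief warm-up.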
 

HardcoreCuddler

Engaged Member
Aug 4, 2020
2,402
3,088
me3 said:
Those learning rates seem insane. [...]
LR all depends on errors afaik.
If the difference in errors (difference between what you get and what you want) between epochs isn't high (you're not making progress), you need a bigger LR.
If the difference in errors between epochs varies greatly, and possibly randomly (you're getting all-over-the-place results), you need a lower LR.
Keep in mind "bigger" and "lower" can mean anything from +-0.001 to +-0.1 LR. You really don't know until you've tried everything, though usually these values are adjusted vvveeeeryyy slowly.
A very small LR isn't always good though, because it can get you stuck in what my prof called "pits" in the error field (I don't remember the dimension definitions of the field, but the Y axis was the error). Basically, a higher LR would allow you to skip or get out of those pits. Still, the issue with a high LR is that those pits may be all that's available in a situation, so you may miss the global minimums (the deepest pits, which may be very small in 'diameter') because a high LR doesn't have very good resolution.
That's literally all I remember from my AI class lol.
TL;DR: Try a jumble of LRs until you run out of things to try, unless you know some super advanced math that not even my prof understands properly.
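That "pits" intuition is just gradient descent on a loss surface: too small a step and you crawl (or get trapped in a local minimum), too large and you overshoot and diverge. A minimal demonstration on the simplest possible surface, f(x) = x squared, where any LR above 1.0 makes the update x - lr*2x oscillate with growing amplitude:

```python
def gradient_descent(lr: float, x0: float = 5.0, steps: int = 100) -> float:
    """Minimize f(x) = x^2 (gradient is 2x) and return the final position."""
    x = x0
    for _ in range(steps):
        x -= lr * 2 * x  # standard gradient descent update
    return x

print(abs(gradient_descent(0.4)) < 1e-6)  # True: converges toward the minimum at 0
print(abs(gradient_descent(1.1)) > 1e6)   # True: overshoots every step and blows up
```

Real TI training surfaces have many pits rather than one bowl, which is exactly why the trial-and-error described above is hard to avoid.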
 

me3

Member
Dec 31, 2016
316
708
HardcoreCuddler said:
LR all depends on errors afaik. [...]
Your initialization text should come into play as well; the basic idea is that it puts you "close" to where you should be, so you shouldn't need to do that much jumping around. The more complex the subject, the more complex the search, though.
Given how many TIs, Loras etc. people are posting on sites that have wide effects on things they shouldn't and are difficult to work with or combine, I'm worried that people see the quick results without considering the side effects. So instead of fine-tuning and carefully adding to the composition, it's just being brute-forced in, which might make it difficult to get things working together.
 

me3

Member
Dec 31, 2016
316
708
Since I mentioned single-epoch training:
1epoch.png

Left is from the training set; right is the first image generated using a 1-epoch TI with a 0.9 (I think) LR. (The prompt was basically just "<name> bikini location detailed face and eyes"; not sure the detailed face/eyes were needed, I just didn't remove them from the prompt that was already there, being lazy and all that.)
 

me3

Member
Dec 31, 2016
316
708
Since I came across the Kendra Lora in here while trying to catch up with the thread: I think mine might be slightly broken. Can't quite put my finger on it, but something seems a bit off... hmm
Kendra (1).png Kendra (2).png Kendra (3).png

A couple of misses that could potentially be saved with some in/outpainting (images in spoiler).

(sorry for any crushed dreams and nightmares)
(edited to link the post with the lora)
 

sharlotte

Member
Jan 10, 2019
268
1,440
This Kendra: yikes, I'd say your prompts might need rework. Though to be honest, the lady of the LORA doesn't look much like Kendra Lust; she looks more like Lisa Ann in the images available on Civitai.

Anyway, back to the spinning wheel (Ferris in this case): 00060-3212408967-a haselblad bokeh ((photograph)) of a stunning woman,  (high detailed skin_1....png
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,794
sharlotte said:
This Kendra: Yikes, I'd say your prompts might need rework. [...]
I think he might be talking about my Kendra Lora by SMZ-69. Good job on the undead nurses btw, me3. I'd like to spin those wheels, Charlotte.. :love: (y)
 

me3

Member
Dec 31, 2016
316
708
sharlotte said:
This Kendra: Yikes, I'd say your prompts might need rework. [...]
Mr-Fox is correct; to be fair, I should have addressed my post and linked the post with the Lora.
It's probably because of the thousands of images I've been reviewing (and discarding) lately while testing training (oh, the cursing and swearing...), but her left hand... seems she doesn't come up short in that area either ;), the ever-returning problem.

Mr-Fox said:
I think he might talk about my Lora of Kendra by SMZ-69. [...]
Thanks, I'm just trying different things to see what works, and maybe inspire ideas for myself and maybe even others, even in just what you stick in a prompt.