[Stable Diffusion] Prompt Sharing and Learning Thread

Mr-Fox

Well-Known Member
  • kohya_ss, trying to train a TI.
Ok, I haven't done any TI yet, so I can't be of any help. I can share links to info sources, though.
(Textual Inversion/Hypernetwork Guide)
("--RETARD'S GUIDE TO TEXTUAL INVERSION--")
(Training a Style Embedding in Stable Diffusion with Textual Inversion)

In my experience TIs are almost always problematic, so I'm sticking to LoRAs for now. The same goes for hypernetworks.
 

sharlotte

Member
The best one I've seen (and followed) for training a TI is:
It was posted here a few weeks back, though probably not on the front page. The author creates a character and then trains a TI on it; it's not an overly long process and it gets very good results.
 

KingBel

Member
sharlotte said: "The best one I've seen (and followed) for training a TI is: …"
Hi

This is the github link:

You should probably also check out the textual inversion channel on the Unstable Diffusion Discord for lots more resources, tutorials, and discussions.
 

me3

Member
Mr-Fox said: "Ok, I haven't done any TI yet, so I can't be of any help. I can share links to info sources, though. …"
That's sort of the type of "guides" I'm referring to: generally lacking in detail, or just flat-out wrong in many regards, often important ones.
KingBel said: "This is the github link: … Should probably also check out the textual inversion channel on the Unstable Diffusion Discord …"
I used that guide in the beginning for some things, but there has to be something horribly wrong with the training explanation: 25 images and just 150 steps simply doesn't work. Someone has pointed that out to the creator as well, but they seem completely unwilling to respond to the issue.
The creator's own explanations in different places don't seem to match up either, which makes it look like they are mixing up terms and/or settings.

Also, one thing that seems very relevant with any training is the actual data used: the images, the captions, and all the settings. However, you don't really see people supplying those. If they did, others could replicate the results (assuming the guides were accurate, which I'm starting to doubt in many cases) and then use that as a basis for their own images, since they would know better what to look for during the process.
 

me3

Member
This is from a training in a1111.
help.png
I was testing a "warm-up" like Dreambooth etc. use, where the learning rate increases marginally every epoch until about 10% of the steps. The third image says just 25 steps, but that's ~3 epochs, and tbh I'm struggling to find much difference from the one at 2200 steps. I tried the same setup on a different set of images and it failed completely, despite having the same number of images, the same "distribution", the same simple captioning, etc. Unfortunately I don't have any of the results from it, but it started with something that would put bodybuilders to shame, and when I gave up it was somewhere between a very successful anorexic and a skeleton...

I can't work out why one worked and one didn't, nor does it really make any logical sense (clearly it does to a computer, so I guess there is something logical to it). Which is why guides should provide the data involved: it can make a huge difference in the results, and at least you'd know what you have to work with and what the target is, which makes it much easier to find the path.
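To illustrate the kind of schedule described above: a minimal sketch of a linear warm-up over the first ~10% of training steps. The base rate and step counts here are made-up values for illustration, not the actual settings from the run above.

```python
# Hypothetical linear warm-up: ramp the learning rate from ~0 up to base_lr
# over the first 10% of training, then hold it constant.
def lr_at_step(step, total_steps, base_lr=5e-3, warmup_frac=0.10):
    warmup_steps = max(1, int(total_steps * warmup_frac))
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    return base_lr

# Example: with 2200 total steps, the warm-up covers the first 220.
for s in (0, 110, 220, 2199):
    print(s, round(lr_at_step(s, 2200), 6))
```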
 

Mr-Fox

Well-Known Member
me3 said: "That's sort of the type of "guides" I'm referring to …"
Yes, this is a prevalent problem in many areas of the internet: people make guides about various things without actually being knowledgeable enough to do so, or do it quickly and sloppily.

me3 said: "Also, one thing that seems very relevant with any training is the actual data used …"
Yes, this is exactly it. It's the same when training LoRAs: the quality of the source images, the captions, and the settings are key to a good end result.
I used an excellent guide for LoRA training, with the OP doing regular updates as he learns and as the tools get updated.
He also shares everything about the process, data, settings, etc. I don't know if there's any crossover with TI training, though
(in case you're interested).
 

Mr-Fox

Well-Known Member
me3 said: "This is from a training in a1111. … I can't work out why one worked and one didn't …"
Beautiful woman. :love:
 

devilkkw

Member
me3 said: "This is from a training in a1111. …"
I posted something about training textual inversion a while ago; I use the standard a1111 training.
What I modify when training people is the images: I use 768x768, and I cut out every background to keep only the person I want in the image, saving it as a PNG with alpha. Then when training I check "use alpha as loss weight".
The captions also become simpler, because you can describe the subject better and drop everything about the background.
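For anyone wondering what "use alpha as loss weight" does conceptually: the transparent background pixels get zero weight in the reconstruction loss, so training focuses on the subject. Below is a minimal pixel-space PyTorch sketch of the idea; it is illustrative only, not a1111's internals (which work in latent space, so the mask gets downscaled there).

```python
import torch.nn.functional as F

def alpha_weighted_mse(pred, target, alpha):
    """pred/target: (B, C, H, W) tensors; alpha: (B, 1, H, W) in [0, 1].
    Background pixels (alpha = 0) contribute nothing to the loss."""
    per_pixel = F.mse_loss(pred, target, reduction="none")  # (B, C, H, W)
    weighted = per_pixel * alpha                            # mask out background
    # Normalise by the weight mass so the loss scale stays comparable
    # across images with different amounts of background removed.
    return weighted.sum() / (alpha.sum() * pred.shape[1] + 1e-8)
```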
The learning rate I usually use for 3000 steps is: 1.9:200, 0.9:400, 0.4:600, 0.06:800, 0.0005. I save an image and an embedding every 100 steps, and check which is better during training.
Usually results start improving somewhere between steps 800 and 1700, so I check those TIs in the generation phase and test which one really is best.
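That value string uses a1111's piecewise schedule syntax: each rate:step pair holds that rate until the given step, and a trailing bare value applies for the rest of training. A toy parser to show how the string reads; this is illustrative, not a1111's own code.

```python
# Parse an a1111-style schedule like "1.9:200, 0.9:400, 0.4:600, 0.06:800, 0.0005".
def parse_schedule(spec):
    pairs = []
    for chunk in spec.split(","):
        chunk = chunk.strip()
        if ":" in chunk:
            rate, until = chunk.split(":")
            pairs.append((float(rate), int(until)))
        else:
            pairs.append((float(chunk), None))  # applies until the end
    return pairs

def rate_at(step, pairs):
    for rate, until in pairs:
        if until is None or step < until:
            return rate
    return pairs[-1][0]

sched = parse_schedule("1.9:200, 0.9:400, 0.4:600, 0.06:800, 0.0005")
print(rate_at(150, sched), rate_at(500, sched), rate_at(2500, sched))
# -> 1.9 0.4 0.0005
```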
One TI trained with these values is Oily Helper; you can find it in my civitai profile.
 

me3

Member
Please don't take this as harsh criticism, as it's not meant that way, but something feels a bit "wrong". It could just be that it's so perfectly centered that you get the feeling one half was mirrored to make the whole, even though you can see there are differences. It's probably just one of those balance and/or ratio things, where your mind simply responds to it being so centered and aligned.
 

Sepheyer

Well-Known Member
me3 said: "Please don't take this as harsh criticism … something feels a bit "wrong". …"
There are all kinds of things wrong with the ship; the details aren't where they are supposed to be, etc., etc.
 

fgriff

Newbie
Jul 3, 2018
37
14
Hello,
I'm trying to find a prompt and model for these images, specifically the lighting/style/colors. Does anyone have any suggestions?

7237707110883052805 (3).jpg 7237707110883052805 (4).jpg

Using a prompt and LoRA from civitai, I got the following result, which is nice but not quite the same:

00400-2139418103.png
 

TitaniumDickDiamondBalls

Well-Known Member
fgriff said: "I'm trying to find a prompt and model for these images, specifically the lighting/style/colors. …"
My blind guess would be that you have a different model, or a different version of the LoRA.
 

me3

Member
Dec 31, 2016
316
708
fgriff said: "I'm trying to find a prompt and model for these images, specifically the lighting/style/colors. …"
Given that it's seemingly art that someone is trying to make people pay money for on Patreon, you'd hope it involves some "private" work, and that they aren't just ripping off other creators as well as the people gullible enough to pay for "art" that isn't unique, is easily duplicated, and is "sold" in unlimited amounts...

You're better off finding your own "style", though. "Art", as much as this is that, shouldn't just be copied/replicated; you should rather find your own take on it.
You could look for some lighting LoRAs to "dim" things, or some kind of blurring. It seems like you're getting overly bright colors, so some of the weighting might be causing issues, and the LoRA is giving you oversaturation.

== Updated ==
Looking at the LoRA on civitai and one of the creator's images, the model perfectWorld_perfectWorldBakedVAE seems like a close match for the background; it's on huggingface.
 

Mr-Fox

Well-Known Member
me3 said: "You're better off finding your own "style", though … the model perfectWorld_perfectWorldBakedVAE seems like a close match for the background …"
I agree completely. It's impossible to know for sure how an image was generated without the PNG info. Finding your own style is part of the fun; however, it's a good learning exercise to try to replicate others' images first. I would try the checkpoints most commonly used for whichever theme or style one is trying to replicate.
Besides Perfect World, there are the different versions of AbyssOrangeMix, ReV Animated, NeverEnding Dream (NED), Kotosmix, and many more. Try the samus LoRAs on civitai, etc.
Try the very popular upscaler that many who make this style use.
When using hires fix, always set the hires steps to 2x the sampling steps: if you use 20 sampling steps, use 40 hires steps.
I always recommend 20-30 sampling steps and 40-60 hires steps in order to preserve the image composition.
Thanks to devilkkw for the tip.
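As a concrete example of that rule, here's roughly what those values could look like in an a1111 txt2img API payload. The field names loosely mirror the webui's API and may vary by version, and the denoising value is just an illustrative assumption.

```python
# Sketch of a1111 txt2img settings following the "hires steps = 2x sampling
# steps" rule; field names follow the webui API loosely, not an exact spec.
payload = {
    "steps": 25,                  # sampling steps (20-30 recommended above)
    "enable_hr": True,            # turn on hires fix
    "hr_second_pass_steps": 50,   # 2x the sampling steps
    "hr_scale": 2,                # e.g. 512x512 -> 1024x1024
    "denoising_strength": 0.4,    # illustrative value, tune to taste
}
```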
Try Restore Faces (GFPGAN) and/or GFPGAN in postprocessing. About the prompt: write in the positive prompt the things you wish to see in your image, and in the negative prompt the things you don't wish to see.
You can also use the negative prompt to reinforce the positive: if you want really large breasts, put "large breasts" in the positive and "small breasts" in the negative.
Adding weight is even more powerful, for example:
Positive: (large breasts:1.2)
Negative: (small breasts:1.2)
Use a value over 1 for an increase and under 1 for a decrease.
Don't forget to add terms that describe the genre/style and terms that describe the image/photo style, such as "wide angle lens", "depth of field", "sharp focus", etc. The same goes for color and light: (vivid color:1.2), or (vivid color:0.8) for less; (diffused light:1.2), or (diffused light:0.8) for less.
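For anyone curious how that "(text:1.2)" syntax is read: the weight scales the emphasis placed on that phrase. Here's a toy parser showing how such a prompt splits into (phrase, weight) pairs; it is illustrative only, not the webui's actual parser (which also handles nesting and [] de-emphasis).

```python
import re

# Toy version of a1111-style "(phrase:weight)" emphasis syntax: weight > 1
# strengthens the phrase, weight < 1 weakens it.
WEIGHTED = re.compile(r"\(([^():]+):([\d.]+)\)")

def parse_weights(prompt):
    pieces, pos = [], 0
    for m in WEIGHTED.finditer(prompt):
        if m.start() > pos:                       # plain text keeps weight 1.0
            pieces.append((prompt[pos:m.start()], 1.0))
        pieces.append((m.group(1), float(m.group(2))))
        pos = m.end()
    if pos < len(prompt):
        pieces.append((prompt[pos:], 1.0))
    return pieces

print(parse_weights("a woman, (large breasts:1.2), (diffused light:0.8)"))
# [('a woman, ', 1.0), ('large breasts', 1.2), (', ', 1.0), ('diffused light', 0.8)]
```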
 

Sepheyer

Well-Known Member
Gorgeous :love:

Just a question out of curiosity. Why do you link to Efgypt on Bleeter or Life Invader? And what is Efgypt about?:unsure:
I didn't know I had anything on Bleeter / LI; they're probably leeching my twitter. I don't really know what those sites are.

I post collages on twitter under Efgypt. It scratches my itch for landscapes and IRL commentary.

a_09178_.png
(ComfyUI prompt included.)
 

Halmes

Newbie
I need some help. I want high-quality pictures. At 512x512 every picture comes out alright, but the picture is small and the quality is shit, as shown below.

00056-925732026.png Näyttökuva (3).png

Then I send it to Extras to upscale it to 2048x2048. It comes out bigger, but the quality is still shit. And when I set the size to 1920x1080, this happens: 00057-925732026.png
They are all beautiful, but I just want one girl in the picture. Hires.fix doesn't help, or I don't know how to use it.