[Stable Diffusion] Prompt Sharing and Learning Thread

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
Does anyone have a tutorial on how to make a LyCORIS model?
I'm also interested in this. If you track down any good info source, please share. A search turned up these links:



 
  • Like
Reactions: PandaRepublic

me3

Member
Dec 31, 2016
316
708
In a basic sense it's not much different from training a TI or a LoRA. Most of the difference is related to the optimizer and not really the "type".

I've found some odd behavior in training recently and I'm not sure what is causing it.
Since I've been testing a lot of different values/settings, I created a "base" file which I reloaded each time to reset things in case I'd accidentally updated a setting. I ran multiple trainings on that base file to verify that the results were the same each time; the generated images had just the slight random variation you'd expect, like strands of hair being slightly different.
After quite a while of testing I started seeing results that didn't make much sense, so I retrained on just the base file, and the generated images were nowhere close to what they initially were. The character was suddenly 3x as old and at least twice the body weight. I've spent over a day now trying to get back to the starting point and have no idea what's "gone wrong". Nothing was updated, the training data remained unchanged, etc., so I'm somewhat confused.

Another thing I noticed is related to epoch settings.
With the base file, the training thankfully only needed 1 epoch, i.e. the -000001 file generated by kohya-ss, but I had left the setting to run for 5 epochs just in case. When I changed this setting without changing anything else, the images generated with the trained file (still using the first-epoch file) changed as well.
So I did multiple trainings with "max epoch" set to different values, then generated images with the same seed using the 000001 file from each of those trainings.
Keep in mind: these image sets are ALL generated using the 000001 file from trainings done with the exact same settings, same data, etc.; the only thing that changed is how many epochs the training would have run for. That is the number in the group titles, so "ep_2" means the training was set to run for 2 epochs, but the first-epoch file was still the one used to generate the images.
xyz_grid-0007.png

This was repeatable too: running the same generation on the same seed gave the same results, so it isn't due to randomness in the generation. For some seeds the difference was much greater as well, e.g. one image would have just the clothing and not even the person, while another showed a person wearing that same clothing.

I'm not sure if this is due to randomness in the optimizer (Prodigy), if it's related to the scheduler (cosine), or if it's some kind of bug in the code.
It could be that the cosine curve is stretched over the whole planned run, so that a higher epoch count changes how far the learning rate has decayed by the end of the first epoch; no idea, to be honest. Because it takes so long to test, I can't really look into it.
Regardless of the cause, the practical result for "us" is that it can affect training results, even though it's a setting you'd expect to do nothing besides determine when training stops.
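If the scheduler is the culprit, the mechanism would look something like this: a cosine schedule computes the learning rate as a function of progress through the *total* planned steps, so the same step number gets a different learning rate when "max epochs" changes. A minimal sketch (plain cosine annealing; Prodigy's adaptive step sizing is ignored, and the step count is illustrative):

```python
import math

def cosine_lr(step, total_steps, base_lr=1.0, min_lr=0.0):
    """Cosine-annealed learning rate at a given optimizer step."""
    progress = step / max(1, total_steps)
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * progress))

steps_per_epoch = 100  # illustrative value
# Learning rate at the end of epoch 1 under different "max epoch" settings:
for max_epochs in (1, 2, 5):
    lr = cosine_lr(steps_per_epoch, max_epochs * steps_per_epoch)
    print(f"max_epochs={max_epochs}: lr after epoch 1 = {lr:.3f}")
# → 0.000, 0.500, and 0.905 respectively: the epoch-1 checkpoint is
#   trained under a very different schedule in each run.
```

That alone would explain why first-epoch checkpoints from otherwise identical runs diverge, without any bug or randomness involved.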
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
In a basic sense it's not much different from training a TI or a LoRA. Most of the difference is related to the optimizer and not really the "type".
...
Regardless of the cause, the practical result for "us" is that it can affect training results, even though it's a setting you'd expect to do nothing besides determine when training stops.
This was very useful information. Thank you!:)(y)
The more we can reduce variables, the more control we'll have over the training and the end result.
 

FreakyHokage

Member
Sep 26, 2017
261
356
Does anyone have a tutorial on how to make a LyCORIS model?
You need the extension to use LyCORIS. Go here, click "Code" and copy the link, then go to the Extensions tab in AUTOMATIC1111 and click "Install from URL". Paste the link under "URL for extension's git repository", click Apply, wait for it to install, then click Reload UI. All LyCORIS models go into your LoRA folder.
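For orientation, the resulting layout looks roughly like this (assuming the default webui folder structure; the extension folder name is a placeholder, since it depends on the repository you cloned):

```
stable-diffusion-webui/
├── extensions/
│   └── <lycoris-extension>/   <- created by "Install from URL"
└── models/
    └── Lora/                  <- LyCORIS .safetensors files go here
```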
 

maikijo

New Member
Jun 28, 2023
3
15
Can someone tell me, please, why this happens? I write "one person" or "1 person" in my prompts, but a lot of people still appear in the picture.

imgonline-com-ua-Resize-uT7w7uv1IH5UW1.jpg imgonline-com-ua-Resize-UFeT3xiBZnBX3AJ.jpg
 

Dagg0th

Member
Jan 20, 2022
200
1,954
Can someone tell me, please, why this happens? I write "one person" or "1 person" in my prompts, but a lot of people still appear in the picture.

View attachment 2753073 View attachment 2753074
Your images aren't the original files (or have been edited), so I can't read the generation data to see what your prompts are doing.

One probable cause is generating at a resolution far from what the model was trained on; unusual aspect ratios often duplicate subjects. Try 512x768.

Put "1girl, solo" in the positive prompt, and use weighting if necessary, e.g. (1girl:1.5).
 

me3

Member
Dec 31, 2016
316
708
I was gonna post a grid showing image repetition and epochs, just in case someone found it useful to see how things change.
I'd done 10 runs, where each run increased how many times the training images were repeated per epoch, and ran 10 epochs each.
Even compressing it down to JPG won't let me post it before it becomes too distorted (it's only 11k x 10k pixels, after all), so instead I'll break the thread rules and just post one of the images from the trainings that I "lost" the file for and have been unable to recreate. The prompt wouldn't help even me, as the LyCORIS file is long gone, so :(
Still, looks like a pretty nice boat/yacht, though a bit out of focus.
lostintraining.jpg
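For reference, varying repeats and epochs multiplies the total optimizer steps, which is why grids like that diverge so quickly. A rough sketch of kohya-style step bookkeeping (the function name and the image count of 20 are illustrative, not kohya's actual code):

```python
def total_steps(num_images, repeats, epochs, batch_size=1):
    """Kohya-style bookkeeping: images are duplicated `repeats` times
    per epoch, then split into batches of `batch_size`."""
    steps_per_epoch = (num_images * repeats + batch_size - 1) // batch_size
    return steps_per_epoch * epochs

# 10 runs with repeats 1..10, 10 epochs each (as in the grid described above),
# assuming a hypothetical set of 20 training images:
for repeats in range(1, 11):
    print(f"repeats={repeats}: {total_steps(20, repeats, 10)} steps")
```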
 

Jimwalrus

Active Member
Sep 15, 2021
853
3,179
I was gonna post a grid showing image repetition and epochs, just in case someone found it useful to see how things change.
...
Still, looks like a pretty nice boat/yacht, though a bit out of focus.
View attachment 2754060
What boat?
Oh, the motorboat...
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
Can someone tell me, please, why this happens? I write "one person" or "1 person" in my prompts, but a lot of people still appear in the picture.

View attachment 2753073 View attachment 2753074
You need to post the PNG file. You can find it in stable-diffusion-webui\outputs\txt2img-images, under the date it was generated.
Then we can see the generation data and help you out better. It's also against the thread guidelines to not include the prompt.
Don't worry, no one is going to try to rip you off or anything stupid like that. The more info we have, the better we can help.
Also, it's the spirit and purpose of this thread to share prompts for learning purposes. We are very serious about not copying anyone else's work without giving proper credit.
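The reason the PNG specifically is needed: the webui writes the full generation parameters into a PNG text chunk named "parameters", and re-encoding to JPG strips it. A small sketch with Pillow showing what's stored (the file name and parameter string here are made up for illustration):

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Write a stand-in PNG so the example is self-contained; the webui
# does this automatically for every image it saves.
meta = PngInfo()
meta.add_text("parameters",
              "1girl, solo\nNegative prompt: blurry\nSteps: 20, Seed: 1234")
Image.new("RGB", (8, 8)).save("sample.png", pnginfo=meta)

# Reading it back is essentially what the webui's "PNG Info" tab does:
img = Image.open("sample.png")
print(img.text.get("parameters"))
```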
 

onyx

Member
Aug 6, 2016
128
217
Apologies if this is a repeat question, but is there a way to specify that Girl A is Lora:X and Girl B is Lora:Y?

I'm trying to work backwards from this example ( ):
tmpbb89eiwi (1).jpg

Is there a way to base the rear girl on one Lora and the front girl on another, or does it just blend whatever models you add to the prompt?
 

Sharinel

Member
Dec 23, 2018
481
1,999
Apologies if this is a repeat question, but is there a way to specify that Girl A is Lora:X and Girl B is Lora:Y?

I'm trying to work backwards from this example ( ):
View attachment 2755371

Is there a way to base the rear girl on one Lora and the front girl on another, or does it just blend whatever models you add to the prompt?
Yeah, you can use something like Regional Prompter (there's a really good overview of it on the GitHub page).



So you could have a prompt similar to:

2 people on a couch in a living room
BREAK One girl with Lora A
BREAK One girl with Lora B
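Fleshed out with A1111's standard LoRA prompt syntax, that pattern would look something like the sketch below (the LoRA names and weights are placeholders; region mode and divide ratios are configured in the extension's UI):

```
2girls sitting on a couch in a living room
BREAK 1girl, blonde hair, <lora:characterA:0.8>
BREAK 1girl, black hair, <lora:characterB:0.8>
```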
 

FreakyHokage

Member
Sep 26, 2017
261
356
You need the extension to use LyCORIS. Go here, click "Code" and copy the link, then go to the Extensions tab in AUTOMATIC1111 and click "Install from URL". Paste the link under "URL for extension's git repository", click Apply, wait for it to install, then click Reload UI. All LyCORIS models go into your LoRA folder.
Yeah, I completely misread the comment lol. I thought they were asking how to use LyCORIS, not how to make one lol
 
  • Like
Reactions: Mr-Fox

me3

Member
Dec 31, 2016
316
708
First of all, I've never used ComfyUI before, so a lot of the basics are probably done horribly wrong, even more than usual.
Second, I've never used SDXL, so no idea how the prompting differs.
But it was the only thing that could get the model to even load without OOM, so needs must...
So, with this ideal situation of stacking multiple unknowns, I don't really know if the base model is working correctly, if the UI setup is even remotely behaving well, or if the refiner is being applied in any way close to how it's meant to be.

So here are some test images, base and refiner "pairs"...
base_output_00007_.png refiner_output_00007_.png

base_output_00017_.png refiner_output_00017_.png

Just a base image to show that there still seems to be an issue with multiple subjects (I didn't try to fix it with just prompts); the rest of the image didn't seem too bad though.
base_output_00015_.png
 
  • Like
  • Red Heart
Reactions: Mr-Fox and Sepheyer