[Stable Diffusion] Prompt Sharing and Learning Thread

Sepheyer

Well-Known Member
Dec 21, 2020
1,523
3,589
Do DAZ LORAs make sense?

So, Schlongborn has this LORA tutorial (link in the original post's guides section or here) where he trains a LORA off his DAZ girl.

I finally got to test the tutorial and the LORA, and heck yes, I find these DAZ-LORAs surprisingly good.

Here is the character from his training set:

018.png

And here are a bunch of 512x512 SD renders:

a_16335_.png a_16336_.png a_16337_.png
a_16341_.png a_16342_.png a_16343_.png

So, yeah, if anyone's on the fence about whether or not to invest time into a DAZ/HS2/VAM-based LORA, here's a test of how well it actually works.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
Do DAZ LORAs make sense?

So, Schlongborn has this LORA tutorial (link in the original post's guides section or here) where he trains a LORA off his DAZ girl.

I finally got to test the tutorial and the LORA, and heck yes, I find these DAZ-LORAs surprisingly good.

Here is the character from his training set:

View attachment 2908484

And here are a bunch of 512x512 SD renders:

View attachment 2908475 View attachment 2908476 View attachment 2908477
View attachment 2908480 View attachment 2908481 View attachment 2908482

So, yeah, if anyone's on the fence about whether or not to invest time into a DAZ/HS2/VAM-based LORA, here's a test of how well it actually works.
Don't forget or overlook my Kendra Lora that is also a Daz3d character Lora. ;)

Source Image Example:
dembe2d-80f09b12-9c2e-46e9-9f80-da56627bc00a.png

 

Sepheyer

Well-Known Member
Dec 21, 2020
1,523
3,589
Don't forget or overlook my Kendra Lora that is also a Daz3d character Lora. ;)

Source Image Example:
View attachment 2908601

Oh snap, it never registered in my mind that Kendra's LORA was built off a DAZ girl to start with. The renders were so lifelike, the thought never crossed my mind. Great job!
 
  • Like
Reactions: Mr-Fox

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
any sd1.5-based model that creates nice looking nipples? I don't want to use a LoRA for that; it affects the face of the girl LoRAs I use
It would be easier to help if you had a specific example, uploaded an image, etc. My advice is to work on the prompt and try to find a good seed. Take a look at my prompts to see some examples of tags and weights for breasts and nipples. After you have adjusted your prompt and are starting to see a good trend in the results, generate batches to find a decent seed. Use an xyz plot to find the settings: cfg, steps, sampler, etc. When you have both a good prompt and a decent seed, use an xyz plot to compare different checkpoints and decide which one you like best. You don't necessarily need to do it in this exact order; just be methodical in your workflow and use all the tools at your disposal to compare and find good settings, checkpoint, etc.
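As a purely made-up illustration of the kind of tag weighting I mean (these exact tags and weights are invented for this example, not pulled from my actual prompts, but they use the standard A1111-style (tag:weight) syntax):

```
(perky breasts:1.2), (puffy nipples:1.1), (detailed skin:1.15), soft lighting
Negative prompt: (deformed nipples:1.3), blurry, lowres
```

Nudge the weights in 0.05-0.1 steps and re-run the same seed to see what each tag is actually doing.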
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
Oh snap, it never registered in my mind that Kendra's LORA was built off a DAZ girl to start with. The renders were so lifelike, the thought never crossed my mind. Great job!
I have only talked about it ad nauseam... :p :LOL: Kendra is a creation by the awesome SMZ-69 that I, ahem, "borrowed"..
He has more characters like Kendra. I have thought of making more Loras based on them, and also on other similar creators' work, but I don't want to only do the same thing over and over, and I also don't want to be viewed as someone who only rips off other people.

There are ofc many more.
 
  • Red Heart
Reactions: Sepheyer

Sepheyer

Well-Known Member
Dec 21, 2020
1,523
3,589
I have only talked about it ad nauseam... :p :LOL: Kendra is a creation by the awesome SMZ-69 that I, ahem, "borrowed"..
He has more characters like Kendra. I have thought of making more Loras based on them, and also on other similar creators' work, but I don't want to only do the same thing over and over, and I also don't want to be viewed as someone who only rips off other people.

There are ofc many more.
Oh, so that's why the Kendra renders kept giving me that "hey, did I see this girl before?" feeling - I have a bunch of SMZ's Laras saved around.
 
  • Like
Reactions: Mr-Fox

Dagg0th

Member
Jan 20, 2022
200
1,954
any sd1.5-based model that creates nice looking nipples? I don't want to use a LoRA for that; it affects the face of the girl LoRAs I use
There is a model for ADetailer that fixes nipples - well, more like it inpaints nipples automatically. I haven't used it yet, but if it works like the face model, it's a must!
 

me3

Member
Dec 31, 2016
316
708
Do DAZ LORAs make sense?

So, Schlongborn has this LORA tutorial (link in the original post's guides section or here) where he trains a LORA off his DAZ girl.

I finally got to test the tutorial and the LORA, and heck yes, I find these DAZ-LORAs surprisingly good.

Here is the character from his training set:

View attachment 2908484

And here are a bunch of 512x512 SD renders:

View attachment 2908475 View attachment 2908476 View attachment 2908477
View attachment 2908480 View attachment 2908481 View attachment 2908482

So, yeah, if anyone's on the fence about whether or not to invest time into a DAZ/HS2/VAM-based LORA, here's a test of how well it actually works.
One thing about this though: unless they've changed it again, the DAZ ToS says something along the lines of the content not being allowed for training AI.
Just something to keep in mind if you're a paying customer etc. of their software/services.
 
  • Like
Reactions: Mr-Fox

Sepheyer

Well-Known Member
Dec 21, 2020
1,523
3,589
One thing about this though: unless they've changed it again, the DAZ ToS says something along the lines of the content not being allowed for training AI.
Just something to keep in mind if you're a paying customer etc. of their software/services.
Thanks, good to know - so one shouldn't voluntarily admit to using DAZ to train a model/LORA, and should never make the LORA's source material public. Short of self-outing, "Daz Productions, Inc." won't be able to prove in court that one violated the ToS.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,523
3,589
Bros, I'll be grateful for any corrections/additional tips to put into this post:
---
Troubleshooting LORA Training

So, it took me a few tries to successfully train a LORA - partly because I am a moron, partly because of older hardware.

First, rule out issues with the dataset by using Schlongborn's dataset included in his LORA training post. This dataset works, and given it has only 20 images, you are guaranteed to waste minimal time while troubleshooting. Also, his post includes a LORA that you can check against as a reference, using values of 0.7 for model and 1.0 for clip. Here is a ComfyUI workflow that you can just plug and play:
[spoiler: ComfyUI workflow]
Now, if you train a LORA on that dataset, this is what can go wrong:

Getting a black render - you used "Network Rank (Dimension)" with a value of 1. I am a moron because Schlongborn's post says to use 128, but I overlooked it. For some reason "1" is the default in Kohya's September 2023 install, and with all those dials I just missed it. Make sure to use at least 128 for this parameter on your initial tries. Same for "Network Alpha": make it 128. I don't know if 128/1 or some such will work; I just know that 128/128 works. Why the default is 1/1 is beyond me. Interestingly, this does affect the size of the LORA: 1/1 gives you a ~10 MB file, while 128/128 gives you a ~150 MB LORA.

Getting an unresponsive LORA - i.e. images render, but you can't tell if the LORA worked because nothing looks like what you'd expect. That's because the training didn't work out. Here's what's up: while the LORA trains, the console will report a loss, like this:

loss.png

And if you are getting "loss=NaN", then the LORA gets zeroes for weights. What likely causes this is the "Mixed precision" setting. It should be "no", because your hardware probably doesn't support the fp16 or bf16 options for whatever reason. It actually might support them, but given Kohya uses a bunch of third-party modules, one of those modules might simply misidentify what you have. So, set "Mixed precision=no" and restart the training: if you start getting a loss equal to an actual number, you probably fixed the issue. Strangely, "Save precision=fp16" is fine.
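For reference, here is a rough sketch of how these dials map onto the train_network.py flags that Kohya calls under the hood - the base model and paths below are placeholders, so treat this as a sketch rather than a known-good command:

```python
# Rough sketch: launching kohya-ss train_network.py with the settings above.
# The model and paths are placeholders; adjust to your own setup.
import subprocess

subprocess.run([
    "accelerate", "launch", "train_network.py",
    "--pretrained_model_name_or_path", "runwayml/stable-diffusion-v1-5",
    "--train_data_dir", "./dataset",    # placeholder dataset folder
    "--output_dir", "./output",
    "--network_module", "networks.lora",
    "--network_dim", "128",             # Network Rank: 128, not the 1/1 default
    "--network_alpha", "128",           # Network Alpha: match the rank
    "--mixed_precision", "no",          # avoids loss=NaN on older hardware
    "--save_precision", "fp16",         # saving in fp16 is fine
], check=True)
```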

Verify the LORA. Kohya has a tool for this - you can check either your own LORA or whatever LORA you downloaded. A bad LORA's output section will look different and will have zeroes all over the place:

verify.png
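If you'd rather sanity-check a LORA outside of Kohya, here is a rough Python sketch of the same idea - it assumes the LORA is a .safetensors file, and the filename is a placeholder:

```python
# Rough sketch: flag NaN or all-zero LORA weights, similar in spirit
# to what Kohya's verify tool reports. Requires `torch` and `safetensors`.
import torch
from safetensors.torch import load_file

state = load_file("my_lora.safetensors")  # placeholder filename
for name, tensor in state.items():
    if torch.isnan(tensor).any():
        print(f"NaN weights (failed training): {name}")
    elif "lora_up" in name and not tensor.abs().max() > 0:
        print(f"all-zero up-weights (nothing learned): {name}")
```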
 

me3

Member
Dec 31, 2016
316
708
Bros, I'll be grateful for any corrections/additional tips to put into this post:
---


Getting a black render - you used "Network Rank (Dimension)" with a value of 1. I am a moron because Schlongborn's post says to use 128, but I overlooked it. For some reason "1" is the default in Kohya's September 2023 install, and with all those dials I just missed it. Make sure to use at least 128 for this parameter on your initial tries. Same for "Network Alpha": make it 128. I don't know if 128/1 or some such will work; I just know that 128/128 works. Why the default is 1/1 is beyond me. Interestingly, this does affect the size of the LORA: 1/1 gives you a ~10 MB file, while 128/128 gives you a ~150 MB LORA.
A simple way to look at rank is as a memory allocation: a set size for what you're trying to train and how much "detail" you can fit. The tricky bit is judging how much you actually need. Not every type of thing you can train needs the same amount, and while having too much mostly only means a bigger file to share/store, it can also mean that the training fits in things you don't want.

One way to notice too low a rank is to save versions throughout the training and generate images with them afterwards. Putting them in a grid, you'll see if concepts suddenly start getting forgotten/replaced as the training progresses. That implies that whatever you're trying to train is filling up the "space" you've got, and new material is pushing out old.

If you run your training at pretty low rates and have regular "checkpoints", i.e. every epoch with a low repeat count, you should see a point where the generated images are almost identical over multiple checkpoints. With the correct rates and settings for your training data you should technically never really "overtrain". Lower rates and a lower repetition count are far, far better than rushing things.
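To put rough numbers on the repeats/epochs trade-off (purely illustrative):

```python
# Illustrative arithmetic: two schedules with the same total step count.
images, batch_size = 20, 1

# 4 repeats x 4 epochs vs 2 repeats x 8 epochs: both give 320 steps,
# but epoch boundaries (shuffling, checkpoint saves) land differently.
steps_a = images * 4 * 4 // batch_size  # 320
steps_b = images * 2 * 8 // batch_size  # 320
print(steps_a, steps_b)
```

Same step count on paper, but the two runs can still behave differently.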

Getting an unresponsive LORA - i.e. images render, but you can't tell if the LORA worked because nothing looks like what you'd expect. That's because the training didn't work out. Here's what's up: while the LORA trains, the console will report a loss, like this:

View attachment 2909179

And if you are getting "loss=NaN", then the LORA gets zeroes for weights. What likely causes this is the "Mixed precision" setting. It should be "no", because your hardware probably doesn't support the fp16 or bf16 options for whatever reason. It actually might support them, but given Kohya uses a bunch of third-party modules, one of those modules might simply misidentify what you have. So, set "Mixed precision=no" and restart the training: if you start getting a loss equal to an actual number, you probably fixed the issue. Strangely, "Save precision=fp16" is fine.
The list of things that can cause "nothing seemed to be learned" is very long, going from the very obvious image/caption issues to more fun things like bugs in the code, which are a real pain to figure out. Network rank can also cause it.
One "fast" way to see if the training is working is to use sample images. Yes, it will slow down training slightly, since it needs to stop to create each image. However, if it means spending 10-20 seconds to see that the training is progressing as expected, rather than waiting an hour to find out it failed from the start, it's worth the delay. Sample images usually won't be perfect, but you should easily be able to see that they're at least close to what you expect.
 
  • Like
Reactions: devilkkw and Mr-Fox

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
Bros, I'll be grateful for any corrections/additional tips to put into this post:
---
Troubleshooting LORA Training

So, it took me a few tries to successfully train a LORA - partly because I am a moron, partly because of older hardware.

First, rule out issues with the dataset by using Schlongborn's dataset included in his LORA training post. This dataset works, and given it has only 20 images, you are guaranteed to waste minimal time while troubleshooting. Also, his post includes a LORA that you can check against as a reference, using values of 0.7 for model and 1.0 for clip. Here is a ComfyUI workflow that you can just plug and play:
[spoiler: ComfyUI workflow]
Now, if you train a LORA on that dataset, this is what can go wrong:

Getting a black render - you used "Network Rank (Dimension)" with a value of 1. I am a moron because Schlongborn's post says to use 128, but I overlooked it. For some reason "1" is the default in Kohya's September 2023 install, and with all those dials I just missed it. Make sure to use at least 128 for this parameter on your initial tries. Same for "Network Alpha": make it 128. I don't know if 128/1 or some such will work; I just know that 128/128 works. Why the default is 1/1 is beyond me. Interestingly, this does affect the size of the LORA: 1/1 gives you a ~10 MB file, while 128/128 gives you a ~150 MB LORA.

Getting an unresponsive LORA - i.e. images render, but you can't tell if the LORA worked because nothing looks like what you'd expect. That's because the training didn't work out. Here's what's up: while the LORA trains, the console will report a loss, like this:

View attachment 2909179

And if you are getting "loss=NaN", then the LORA gets zeroes for weights. What likely causes this is the "Mixed precision" setting. It should be "no", because your hardware probably doesn't support the fp16 or bf16 options for whatever reason. It actually might support them, but given Kohya uses a bunch of third-party modules, one of those modules might simply misidentify what you have. So, set "Mixed precision=no" and restart the training: if you start getting a loss equal to an actual number, you probably fixed the issue. Strangely, "Save precision=fp16" is fine.

Verify the LORA. Kohya has a tool for this - you can check either your own LORA or whatever LORA you downloaded. A bad LORA's output section will look different and will have zeroes all over the place:

View attachment 2909185
Don't forget or overlook the awesome Lora training guide on rentry that I often link to. It was a huge help for me, and it gets updated on a regular basis as new knowledge, tools, and other developments come along. So when something new comes out, he usually updates his guide with a section about it and follows up with his conclusions after doing tests etc.
I have not seen anything remotely close to this guide anywhere else. Most just post their "guide" and abandon it the next minute.

 

Sepheyer

Well-Known Member
Dec 21, 2020
1,523
3,589
Don't forget or overlook the awesome Lora training guide on rentry that I often link to. It was a huge help for me, and it gets updated on a regular basis as new knowledge, tools, and other developments come along. So when something new comes out, he usually updates his guide with a section about it and follows up with his conclusions after doing tests etc.
I have not seen anything remotely close to this guide anywhere else. Most just post their "guide" and abandon it the next minute.

Sweet! I added it to the guides section in the original post.
 
  • Like
Reactions: Mr-Fox and Dagg0th

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
A simple way to look at rank is as a memory allocation: a set size for what you're trying to train and how much "detail" you can fit. The tricky bit is judging how much you actually need. Not every type of thing you can train needs the same amount, and while having too much mostly only means a bigger file to share/store, it can also mean that the training fits in things you don't want.

One way to notice too low a rank is to save versions throughout the training and generate images with them afterwards. Putting them in a grid, you'll see if concepts suddenly start getting forgotten/replaced as the training progresses. That implies that whatever you're trying to train is filling up the "space" you've got, and new material is pushing out old.

If you run your training at pretty low rates and have regular "checkpoints", i.e. every epoch with a low repeat count, you should see a point where the generated images are almost identical over multiple checkpoints. With the correct rates and settings for your training data you should technically never really "overtrain". Lower rates and a lower repetition count are far, far better than rushing things.


The list of things that can cause "nothing seemed to be learned" is very long, going from the very obvious image/caption issues to more fun things like bugs in the code, which are a real pain to figure out. Network rank can also cause it.
One "fast" way to see if the training is working is to use sample images. Yes, it will slow down training slightly, since it needs to stop to create each image. However, if it means spending 10-20 seconds to see that the training is progressing as expected, rather than waiting an hour to find out it failed from the start, it's worth the delay. Sample images usually won't be perfect, but you should easily be able to see that they're at least close to what you expect.
I agree with everything you said, very good info. The most important thing I learned is to spend enough time preparing the source images you will train on and the prompt for each. Though I have read that the training is far more sensitive to bad prompts than to bad images, the difference was very obvious when I got better images. A slow learning rate is also fundamental to a good training run. In addition to slow learning rate settings, there are tricks to "dampen" the training with other settings that have a secondary retarding effect. The guide I linked to has good info about this.
I used a little "noise_offset", which gives the images more dynamic range - more colorful and better details - with the secondary effect that it dampens the learning rate.
 
  • Like
Reactions: devilkkw

Artiour

Member
Sep 24, 2017
252
1,081
So the other day I made a Jessa Rhoads LoRA with JessaR as the trigger word; it took me some time to realize "Jessar" might be some Indian or Pakistani name.
7cc06129287a47df9a51cfe5aa44bf2b_high.png
So this time I remade it with Jessa_Rhoads as the trigger word. I swear to God I wrote nothing about a guitar or a band, nor did I use an image with a guitar in it, and this is the result:
Capture.JPG
Also, there is something with the trigger word Blonka (short for blonde Kasie). I only tagged one image Blonka, and the result was the girl in my avatar picture: all the images generated with Blonka had that kind of heavy makeup and glowing eyes. I created other LoRAs with the same thing (a single image tagged Blonka) and had the same result.
Any explanation for this phenomenon?
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,523
3,589
Gents, would anyone have any guides about "adding" to a LORA? Say I have a LORA trained on 20 images; then I want to keep adding another 20 images later, and so on, until I get a full 100 images in, but I would do so incrementally over the course of a week.

Why is this important? Say I want to conduct experiments:

- Check what happens if the LORA's training dataset lacks the actual subject and contains only the backgrounds from the images that do contain the subject. Naturally, such images would not carry the LORA's nametag.

- Check what happens if one LORA is used to contain multiple subjects, each with their own tags. Say 20 images tagged "Sophia" and 20 images tagged "Kendra". Would I be able to use this one master LORA to successfully have all my girls in?

Incrementally adding to one's LORA would allow testing each of these additions in significantly less time.
 
  • Thinking Face
Reactions: Mr-Fox and Artiour

me3

Member
Dec 31, 2016
316
708
Gents, would anyone have any guides about "adding" to a LORA? Say I have a LORA trained on 20 images; then I want to keep adding another 20 images later, and so on, until I get a full 100 images in, but I would do so incrementally over the course of a week.

Why is this important? Say I want to conduct experiments:

- Check what happens if the LORA's training dataset lacks the actual subject and contains only the backgrounds from the images that do contain the subject. Naturally, such images would not carry the LORA's nametag.

- Check what happens if one LORA is used to contain multiple subjects, each with their own tags. Say 20 images tagged "Sophia" and 20 images tagged "Kendra". Would I be able to use this one master LORA to successfully have all my girls in?

Incrementally adding to one's LORA would allow testing each of these additions in significantly less time.
Not sure what you mean by "adding to".
I.e.:
  1. you train a lora on 20 images, then later you select that lora as a starting point and continue the training with
    1. 20 new images, OR
    2. the 20 old images along with 20 new ones (totaling 40 images)
With 1.2 you'd probably be better off just retraining from scratch, since doubling the number of images would potentially change your learning quite a bit.

With 1.1 I suspect it would depend on how the images were captioned, but I think it would "confuse" the AI. Generally the training picks up little to nothing of the background unless there's "room left" to do so and "there's nothing left to learn" from the subject. Your training set should have backgrounds that are as varied as possible anyway, which makes it less likely for the AI to associate background elements with the subject you're training.

As for multiple subjects in one lora, that should work just fine, assuming you can keep it all within the "size limits" of the lora. There are loras with one subject wearing different clothing or styles; I can't remember seeing any with completely different people, but the training setup for it would be the same. The different image sets would just go in different folders and be tagged accordingly - see the layout sketch below.
(offtopic: One question is how "efficiently" weights are stored in the lora. I.e. if you have one character with and without a hat, would it store just the differences between them, or would they be stored as 2 complete "weight sets"?)
You'd probably need to make sure your sets are rather close in how fast the AI picks them up, though. You can probably compensate with different repeat counts for each set, because just because repeats * images is the same doesn't mean they'll learn at the same rate.
Through testing, it seems that even running the same set at something like 4 repeats for 4 epochs doesn't give the same end result as doing 2 reps for 8 epochs. This could be due to "something" in the code itself that makes it behave slightly differently; I'd rather not dig into all that. If you're lucky enough to do batches as well, that apparently throws in another curve ball. Anyway....
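For what it's worth, here's a sketch of how such a multi-subject set could be laid out, assuming the usual kohya "<repeats>_<trigger> <class>" folder naming (the names and repeat counts are made up):

```
dataset/
├── 20_sophia woman/   # "sophia" set at 20 repeats
│   ├── 001.png
│   └── 001.txt        # caption starting with "sophia"
└── 30_kendra woman/   # "kendra" set at 30 repeats, if it learns slower
    ├── 001.png
    └── 001.txt
```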

I'd planned to put multiple subjects into one lora but haven't gotten around to it yet, as I was struggling to find a common setting so they'd all learn at roughly the same rate. I considered training one lora on one set and then "appending" a new set to it; I started on it with the lora I posted a while back, but an update broke things so I can't even launch the trainer, and I've not gotten back to it yet.
I remember seeing something about lora merging at the time too, but I'm not sure how that works.
 
  • Like
Reactions: Mr-Fox

rogue_69

Newbie
Nov 9, 2021
78
235
Thanks, good to know - so one shouldn't voluntarily admit to using DAZ to train a model/LORA, and should never make the LORA's source material public. Short of self-outing, "Daz Productions, Inc." won't be able to prove in court that one violated the ToS.
I was messing around with training Daz Loras for a while, but I came up with a better workflow. Use a Daz Gen 8 headshot in Stable with prompts, take an image you like from that, and use it with Face Transfer in Daz (use Face Transfer Shapes if you have it). Either use the texture from the Face Transfer or use another texture; you're mainly just using the new Daz model you created for nose, mouth, eyes, and face consistency in Stable. Render out a bunch of images in Daz, bring them over to Stable, and make images using the same prompts you used before.
Here is a quick example I threw together because I was bored. If I had put more work into it, I could have gotten even more consistent results.
ZomaPoses08.png
ZomaPoses09.png
ZomaPoses10.png ZomaPoses11.png
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,523
3,589
I was messing around with training Daz Loras for a while, but I came up with a better workflow. Use a Daz Gen 8 headshot in Stable with prompts, take an image you like from that, and use it with Face Transfer in Daz (use Face Transfer Shapes if you have it). Either use the texture from the Face Transfer or use another texture; you're mainly just using the new Daz model you created for nose, mouth, eyes, and face consistency in Stable. Render out a bunch of images in Daz, bring them over to Stable, and make images using the same prompts you used before.
Here is a quick example I threw together because I was bored. If I had put more work into it, I could have gotten even more consistent results.
View attachment 2910699
View attachment 2910713
View attachment 2910707 View attachment 2910710
Actually, as is, this method already shows unparalleled consistency. Could you please elaborate on what you mean by "Use a Daz Gen 8 headshot in Stable with prompts"? I am not a DAZ guy, so I struggle to parse whether you use "Daz Gen 8" as a token in the SD prompt, or whether you meant something else.