Any good prompt to do a lifted-up/rolled-up skirt or dress better?
Anytime I do it with ControlNet, it tends to pull the skirt down,
or I get weird shapes, like panties around the crotch.
If you want to prevent it, then maybe try to use those LoRAs in the negative prompt. Just a thought, since you are not being clear about what you are actually looking for. When you want something very specific, it's probably best to either train a LoRA yourself or use ControlNet.
I doubt it. I think faces, nipples and hands have almost the same root cause, which transcends the models and is rather a part of the methodology. I.e. the SD models actually create crappy faces, so we use face restore modules to fix them. Hence, in theory, there should be a nipple restore module to make nipples look natural.
Oh snap, it never registered in my mind that Kendra's LoRA was built off a DAZ girl to start with. The renders were so lifelike, the thought never crossed my mind. Great job!
It would be easier to help if you had a specific example, uploaded an image, etc. My advice is to work on the prompt and try to find a good seed. Take a look at my prompts to see some examples of tags and weights for breasts and nipples. After you have adjusted your prompt and are starting to see a good trend in the results, generate batches to find a decent seed. Use the X/Y/Z plot to find the settings: CFG, steps, sampler, etc. When you have both a good prompt and a decent seed, use the X/Y/Z plot to compare different checkpoints and decide which one you like best. You don't necessarily need to do it in this exact order; just be methodical in your workflow and use all the tools at your disposal to compare and find good settings, checkpoint, etc.
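For instance, tag weighting in the A1111 prompt syntax looks like this (generic placeholder tags, not one of my actual prompts):

```
photo of a woman, (detailed skin:1.2), (perfect nipples:1.15), soft lighting
Negative prompt: (deformed nipples:1.3), blurry, lowres
```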
I have talked about it ad nauseam.. Kendra is a creation by the awesome SMZ-69 that I, ahem, "borrowed"..
He has more characters like Kendra. I have thought of making more LoRAs based on them, and also on characters from other similar creators, but I don't want to just do the same thing over and over, and I also don't want to be viewed as someone who only rips off other people.
Oh, so that's why the Kendra renders gave me that "hey, did I see this girl before?" feeling - I have a bunch of SMZ's Laras saved around.
There is a model for ADetailer that fixes nipples, well, more like inpaints nipples automatically. I haven't used it yet, but if it works like the face model, then it's a must!
One thing about this though: unless they've changed it again, the DAZ ToS says something along the lines of it not being allowed to train AI on their assets.
Just something to keep in mind if you're a paying customer etc. of their software/services.
Thanks, good to know. So one shouldn't voluntarily admit to using DAZ to train a model/LoRA, and should never make public the source material for the LoRA. Short of self-outing, Daz Productions, Inc. won't be able to prove in court that one violated the ToS.
Bros, I'll be grateful for any corrections/additional tips to put into this post:
--- Troubleshooting LoRA Training
So, it took me a few tries to successfully train a LoRA; partly because I am a moron, partly because of older hardware.
First, rule out issues with the dataset by using Schlongborn's dataset included in his LoRA training post. This dataset works, and since it has only 20 images, you are guaranteed to waste minimal time while troubleshooting. His post also includes a LoRA that you can check against as a reference, using a value of 0.7 for the model and 1.0 for clip. Here is a ComfyUI workflow that you can just plug and play:
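If you only want the key setting from that workflow, the LoRA loader node boils down to this - a minimal sketch of ComfyUI's API JSON as a Python dict, with the file name and node IDs as placeholders:

```python
# Sketch of the relevant ComfyUI node: strength_model 0.7, strength_clip 1.0.
lora_loader_node = {
    "class_type": "LoraLoader",
    "inputs": {
        "lora_name": "reference_lora.safetensors",  # placeholder file name
        "strength_model": 0.7,  # the 0.7 "model" value
        "strength_clip": 1.0,   # the 1.0 "clip" value
        "model": ["4", 0],  # wired from the checkpoint loader node
        "clip": ["4", 1],
    },
}
```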
Now, if you train a LoRA on that dataset, this is what can go wrong:
Getting a black render - you used "Network Rank (Dimension)" with a value of 1. I am a moron because Schlongborn's post says to use 128, but I overlooked it. For some reason 1 is the default in Kohya's September 2023 install, and with all those dials I just missed it. Make sure to use at least 128 for this parameter on your initial tries. Same for "Network Alpha": make it 128. I don't know if 128/1 or some such will work; I just know that 128/128 works. Why the default is 1/1 is beyond me. Interestingly, this does affect the size of the LoRA: 1/1 gives you a ~10 MB file, while 128/128 gives you a ~150 MB LoRA.
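For reference - just a sketch, not my exact command, since I used the GUI - those two GUI fields map onto kohya's underlying sd-scripts flags like this (all paths are placeholders):

```python
# Minimal sketch of launching kohya's sd-scripts LoRA trainer with an
# explicit rank/alpha of 128/128; the paths below are placeholders.
import subprocess

subprocess.run([
    "accelerate", "launch", "train_network.py",
    "--pretrained_model_name_or_path", "base_model.safetensors",  # placeholder
    "--train_data_dir", "dataset/",                               # placeholder
    "--output_dir", "output/",                                    # placeholder
    "--network_module", "networks.lora",
    "--network_dim", "128",    # "Network Rank (Dimension)" in the GUI
    "--network_alpha", "128",  # "Network Alpha" in the GUI
])
```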
Getting an unresponsive LoRA - i.e. you get images rendered, but you can't tell if the LoRA worked because nothing looks like what you'd expect. That's because the training didn't work out. Here's what's up: while a LoRA trains, the console will report the loss, like this:
If you are getting "loss=NaN", then the LoRA gets zeroes for weights. What likely causes this is the "Mixed precision" setting. It should be "no", because your hardware probably doesn't support the fp16 or bf16 options for whatever reason. It might actually support them, but since Kohya uses a bunch of third-party modules, one of those modules might simply misidentify what you have. So set "Mixed precision = no" and restart the training: if the loss starts coming out as an actual number, you have probably fixed the issue. Strangely, "Save precision = fp16" is fine.
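In flag form, again just a sketch with the rest of the arguments the same as above:

```python
# Sketch: the GUI's "Mixed precision" and "Save precision" map to these
# sd-scripts flags. Disabling mixed precision avoids the loss=NaN case,
# while still saving fp16 weights.
import subprocess

subprocess.run([
    "accelerate", "launch", "train_network.py",
    # dataset/model/network flags omitted - same as in the rank example above
    "--mixed_precision", "no",   # "no" if fp16/bf16 is misdetected on your hardware
    "--save_precision", "fp16",  # saving in fp16 is still fine
])
```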
Verify the LoRA. Kohya has a tool for this - you can check either your own LoRA or whatever LoRA you downloaded. A bad LoRA's output section will look different and will have zeroes all over the place:
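If you'd rather check by hand than through the GUI, here is a minimal sketch that just inspects the weights inside the .safetensors file (the file name is a placeholder; needs the torch and safetensors packages):

```python
# Minimal sketch: spot a "dead" LoRA by checking whether its weights are
# all zeros or NaN. A healthy LoRA shows small non-zero magnitudes.
import torch
from safetensors.torch import load_file

state = load_file("my_lora.safetensors")  # placeholder file name
for name, tensor in state.items():
    if "lora_up" in name or "lora_down" in name:
        print(f"{name}: abs mean = {tensor.abs().mean().item():.6f}, "
              f"has NaN = {torch.isnan(tensor).any().item()}")
```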
A simple way to look at rank is as a memory allocation: a set size for what you're trying to train and how many "details" you can fit. The tricky bit is judging how much you actually need. Not every type of thing you can train needs the same amount, and while having too much mostly just means a bigger file to share/store, it can also mean that the training fits in things you don't want.
One way to notice too low a rank is to save versions throughout the training and generate images with them afterwards. Putting them in a grid, you'll see if concepts suddenly start getting forgotten/replaced as the training progresses. That implies that whatever you're trying to train is filling up the "space" you've got, and new material is pushing out old.
If you run your training at pretty low rates and keep regular "checkpoints", i.e. every epoch with a low repeat count, you should see a point where the images generated over multiple checkpoints are almost identical. With the correct rates and settings for your training data you should technically never really "overtrain". Lower rates and a lower repetition count are far, far better than rushing things.
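In kohya's sd-scripts, that per-epoch checkpointing is a single flag; a sketch, assuming the rest of your arguments stay as they are:

```python
# Sketch: keep a checkpoint per epoch so you can build a comparison grid
# afterwards and watch for concepts being pushed out.
import subprocess

subprocess.run([
    "accelerate", "launch", "train_network.py",
    # dataset/model/network flags omitted - same as your normal run
    "--save_every_n_epochs", "1",  # one checkpoint per epoch for the grid
    "--max_train_epochs", "10",
    "--learning_rate", "1e-4",     # example of a "pretty low" rate; tune for your data
])
```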
The list of things that can cause "nothing seems to have been learned" is very long, going from the very obvious image/caption problems to more fun things like bugs in the code, which are a real pain to figure out. Network rank can also cause it.
One "fast" way to see if the training is working is to use the sample images, yes it will slow down training slightly since it need to stop to create the image. However if it means spending 10-20 sec to see if the training is progressing as expected or waiting an hour to see that it failed from the start, it's worth the delay. Sample images usually won't be perfect but you should easily see that it's at least close to expected