
AI LORAs for Wildeer's Lara Croft (Development Thread)

me3

Member
Dec 31, 2016
316
708
Getting there, at least for anything that isn't full body.
I REALLY don't like the large variance caused by the seed; not sure how to combat that.
Also, for some reason the AI gets VERY hung up on the ribs. None of the source images have ribs anywhere near that visible, so I'm not sure why it gets so focused on them :/

_xyz_grid-0008-1880057079.jpg
_xyz_grid-0010-2938595253.jpg
 
  • Like
Reactions: Sepheyer and Mr-Fox

me3

Member
Dec 31, 2016
316
708
So yesterday I unintentionally came across a post about people having issues training SD1.5 models and loras ever since SDXL support was added to the training tools. Looking at the timeframe, this also fits with much of my own training trouble, which has gotten progressively worse with each recent update. Since I've kept almost all of my training settings files, I've run some of them again, and I'm increasingly certain that "something" is definitely going on that's not down to settings or images.
While the old trainings weren't perfect, they had fairly high likeness. I'll let you all judge whether something is "wrong" with this small selection...

riiigghht.jpg
 
  • Like
Reactions: Mr-Fox and Sepheyer

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Ghost Lara looks like a Karen, doesn't she? Like "if any of you so much as make a peep, I'm gonna shush you sooo hard".. :LOL:
 

me3

Member
Dec 31, 2016
316
708
So I went through about six months of code changes for kohya_ss and the GUI, as that was apparently the timeframe for the issues others had.
Going over changes in unfamiliar code isn't the most accurate thing. I did spot a couple of changes of a nature that could screw things up, but mostly the changes hadn't actually been aimed at SD lora training; they'd mainly been for XL and merging/finetuning.
So I started testing different old versions and "patching" them to run; many wouldn't run or had conflicting deps.
Eventually I got a "version" to run, with various updates and finger crossing. I later found out that others had found that version to work as well... *sigh*... could have saved some time and effort there :(

I ran a training with mainly default values and unreviewed captions to see if there was any difference, and it seems rather noticeable.
testgrid.jpg

The cause of the problem seems to be linked to bitsandbytes; whether it's just that, or a combination of that and some other lib, I can't say.
If you have issues training, it might be worth setting up an old version and seeing if things work out better. Version 0.35.0 of bitsandbytes works at least; I'm not sure whether any of the updates after it do, as I haven't tested them yet. I'll update with the exact GUI tag version once I have access to check, as I can't remember it exactly at the moment.
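If you want to guard against the environment silently drifting past a working bitsandbytes again, a minimal sketch of a version check could look like this. It assumes plain `x.y.z` version strings, and `KNOWN_GOOD` is just the version the post above reports as working, not an official recommendation:

```python
from importlib import metadata

KNOWN_GOOD = "0.35.0"  # assumption: last bitsandbytes version reported to train correctly

def needs_downgrade(installed: str, known_good: str = KNOWN_GOOD) -> bool:
    """Naive x.y.z comparison; True if the installed version is newer than known-good."""
    parse = lambda v: tuple(int(p) for p in v.split("."))
    return parse(installed) > parse(known_good)

def check_env(pkg: str = "bitsandbytes") -> None:
    """Warn if the live environment has drifted past the known-good pin."""
    try:
        installed = metadata.version(pkg)
    except metadata.PackageNotFoundError:
        print(f"{pkg} not installed")
        return
    if needs_downgrade(installed):
        print(f"{pkg} {installed} is newer than {KNOWN_GOOD}; "
              f"consider 'pip install {pkg}=={KNOWN_GOOD}'")
```

Pinning it back is then just `pip install bitsandbytes==0.35.0`.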

It's a bit odd that XL training wasn't affected, but XL uses different learning rates etc., which suggests the issue is linked to that. Images did look overtrained in some ways, but drastically lowering the learning rates didn't fix the issue, so there's something "more" to it.
Hopefully some of the people more familiar with the involved code can figure it out.
 
  • Red Heart
  • Like
Reactions: Mr-Fox and Sepheyer

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Awesome work. Thank you for the effort and sharing it.
 
  • Like
Reactions: Sepheyer

me3

Member
Dec 31, 2016
316
708
So having worked out some of the software issues, I've gotten back to trying to work out all the other training issues. But to be honest, having had training unknowingly broken since April, while constantly trying to get it to work thinking the data or settings were at fault, I'm kinda fed up with it :/

xyz_grid-0016-664052244.png

The grid above is the same as the top row in this one, just a different seed:
lara_grid.jpg

These images are in no way cherrypicked, for the simple fact that the prompt is purely "a woman <lora>". There's no trigger word; I simply forgot to remove that from the comment, so ignore it. As the zip filename suggests, it's trained on .
I did repeated attempts at training on cyberrealistic, elegance and dreamshaper, all of which I've trained other things on just fine, but they refused to work. All of them created better images in this grid than when trained on. Some of the test images seem to have been generated more towards realistic than "render", which is why they're slightly off, but that's down to having no negative prompt etc., so it shouldn't be too hard to fix.
I've not tested this for flexibility or anything, but it generates an actual face, which is massive progress from before. There are probably issues, and some things seem a bit seed dependent, but I guess we're used to that.
All 7 loras are in

(Edited to add some pointless numbers)
Clearing out files/folders after all this, I deleted over 400 "settings files" (and there would have been a lot more, because the version of kohya I have to use doesn't save them automatically), ~16,000 "test" images from testing the loras (>6 GB, not counting grids), and ~835 GB of "discarded" loras...
I know I've cleared out loras and images before, but not how many; still, I suspect this covers the majority, which is kinda scary :/
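For anyone wanting the same housekeeping numbers without counting by hand, a small sketch; the folder and extension are hypothetical, so point it at wherever discarded loras pile up:

```python
from pathlib import Path

def tally(folder: str, pattern: str = "*.safetensors") -> tuple[int, float]:
    """Count files matching pattern under folder and total their size in GiB."""
    files = [f for f in Path(folder).rglob(pattern) if f.is_file()]
    total_gib = sum(f.stat().st_size for f in files) / 1024**3
    return len(files), total_gib
```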
 
Last edited:

Sepheyer

Well-Known Member
Dec 21, 2020
1,566
3,746
Do you have any thoughts about how the checkpoint the LORA is trained on contributes to the end result?

When I look at your grids across different posts, I see that Lara retains her likeness regardless of the training checkpoint. The rendering checkpoint / LORA strength seems a much stronger contributor to the end result than the originating checkpoint. But that's my subjective and rather rushed opinion.
 

me3

Member
Dec 31, 2016
316
708
While in theory you should be able to train the same "face" on any model suited for it, there seem to be differences in how easily and how well that actually works out. Running the same settings on different models suggests they don't see the facial features the same way. The face might get slightly "puffier" in one model, have a rounder chin in another, that sort of thing. Why this is I can't really say, as I don't have a detailed understanding of how the underlying "values" relate, but theory is one thing and the actual practical result is another, and the results suggest there will be a varying degree of difference.

A lora trained on one model will most likely look different on another, because it uses "reference values" from the first that likely won't match the second model. If the models are built from the same base and have kept most or all of those similarities, they will obviously be more alike. This can be a good thing though: if 110% likeness isn't what you're after, you might get a 98% likeness on another model that is "nicer".
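The "reference values" point can be made concrete: at render time a lora's low-rank delta is simply added onto whatever weights the checkpoint already has, so the same delta on a different base gives a different result. A toy sketch with plain nested lists (shapes and names are illustrative, not kohya's actual code):

```python
def apply_lora(w, a, b, alpha=1.0):
    """Return w + alpha * (b @ a) for nested-list matrices.

    w: d_out x d_in base weight; b: d_out x r and a: r x d_in are the
    trained low-rank factors. Swapping in a different w while keeping
    (a, b) fixed is exactly what happens when a lora is rendered on a
    checkpoint it wasn't trained against.
    """
    r = len(a)
    return [
        [w[i][j] + alpha * sum(b[i][k] * a[k][j] for k in range(r))
         for j in range(len(w[0]))]
        for i in range(len(w))
    ]
```

The `alpha` knob is the usual lora strength slider: at 0 you get the base model back, at 1 the full trained delta.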

Determining likeness is a whole other issue though, as it's very subjective to human eyes, and how well people see differences in faces comes greatly into play too. An easy test for that is to just look at all the "famous person" loras. If the creators of many of those thought they'd been "successful", you'd have to wonder wtf is going on.
There are loads of people who link to their trained stuff as examples of how great their training guides/instructions are, and I've come across quite a few where it's meant to be a whole bunch of different real people but they all have the same facial features. I'm extremely surprised the creators seemingly can't tell.

During all the "screwing around with AI images" I've done so far, I've become increasingly sure I have a habit of spotting differences in faces. It doesn't have to be something you can articulate, like "oh, that nose is 2mm too long"; it's more in the direction of "hmmm, something seems off with that <feature>".
This is also more than likely why I have a hard time finishing trainings, as there always seems to be "something wrong". Considering most of it has been "fake" people, where a small variation wouldn't matter at all, it's a very odd (and bad) thing to get hung up on...
 
  • Like
Reactions: Sepheyer

felldude

Active Member
Aug 26, 2017
572
1,691
Most celebrity face data is already well trained into the checkpoints. And it's against the rules to train a lora with body data, so of course no creator would do that. :)

Regarding your in-depth dive into bitsandbytes, it would make sense, as I've noticed most of my old full FP16 trainings are not as good now, while the less accurate BF16 trainings turn out better.

I was reading one of the papers on Adam, and the fact that they still can't 100% explain why the accuracy improves at a certain rate has to really be bothering some mathematicians out there.
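For reference, the Adam update being discussed is only a few lines per weight. A scalar sketch of one step, using the standard hyperparameter defaults rather than any particular trainer's settings:

```python
import math

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a single scalar weight.

    m and v are exponential moving averages of the gradient and squared
    gradient; the (1 - b**t) terms correct their bias toward zero in
    early steps t = 1, 2, ...
    """
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad * grad
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v
```

The per-parameter `sqrt(v_hat)` scaling is what makes the effective step size adapt on its own, which is also why a "good" learning rate for one precision or model doesn't transfer cleanly to another.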
 
Last edited:

me3

Member
Dec 31, 2016
316
708
I initially ran trainings with bitsandbytes 0.35, and when I had something "stable" I updated to 0.41.1 and ran the same again. While the results weren't exactly the same, they were easily within what you'd expect. That being said, there are still some problems, and they aren't limited to external code; there's "something" within the trainer code too. Exactly what, I'm less sure of. There are too many variables and it takes far too long for me to run each "test", so I can't investigate it in any real and useful way, unfortunately.

One thing I noticed: I can't use bf16, but I tried full fp16 and it "worked", in the sense that it trained and gave somewhat expected results. Whether it's better or not I can't really say, but using it together with Prodigy broke completely. It did run the training, but using the lora had no effect on generation. With that option unchecked, it trained fine, so there's some kind of conflict there. Whether it's because I'm having to use older code I don't know; old code with newer libs can cause its own problems.
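The fp16 vs bf16 difference here is mostly range vs precision: IEEE float16 tops out around 65504, while bfloat16 keeps float32's full exponent range but drops mantissa bits. A stdlib-only sketch of both roundings (the helpers are illustrative, not what the trainer actually does internally):

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip through IEEE half precision; values past ~65504 overflow."""
    return struct.unpack("e", struct.pack("e", x))[0]

def to_bf16(x: float) -> float:
    """Emulate bfloat16 by zeroing the low 16 bits of the float32 encoding."""
    (bits,) = struct.unpack("I", struct.pack("f", x))
    return struct.unpack("f", struct.pack("I", bits & 0xFFFF0000))[0]
```

A value of 70000.0 survives `to_bf16` (coarsely, as 69632.0) but can't be packed as fp16 at all, which is the kind of silent range breakage that loss scaling in mixed-precision training exists to work around.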
 

felldude

Active Member
Aug 26, 2017
572
1,691
Have you tried "pip install --upgrade --force-reinstall -r requirements.txt"?

It's possible you have a bad/old package that keeps getting reinstalled.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,566
3,746
Just as an addendum, there's also the matter of cache cleaning. Given the likes the message recommending a cache purge received, I trust it applies in quite a few cases:
Code:
pip uninstall torch
pip cache purge
pip install torch -f https://download.pytorch.org/whl/torch_stable.html
 

felldude

Active Member
Aug 26, 2017
572
1,691
I'll just add that even with a clean install, FP16 and full FP16 seem to have been broken since the XL update.

I did the same training set with FP16 vs BF16, and the FP16 run learned nothing.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,566
3,746
I'll be attempting a bodypaint LORA. I'll be grateful for any suggestions on what the training settings should be.

First two attempts went tits up.

I will be posting a dataset that reflects what I am trying to achieve. I'll probably run the entire dataset up to 150 images, including close-ups as well as full shots.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,566
3,746
FML, I just now realized these are 1152x1152 and I was training them at 512x512. Genius.
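A quick way to catch this before a wasted run: PNG files carry width and height at a fixed offset in the IHDR chunk, so a stdlib-only check over a dataset folder is only a few lines. Sketch below; for JPEGs you'd want a real image library like Pillow instead:

```python
import struct

def png_size(path: str) -> tuple[int, int]:
    """Read (width, height) from a PNG's IHDR chunk (file bytes 16..24)."""
    with open(path, "rb") as f:
        header = f.read(24)
    if header[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError(f"{path} is not a PNG")
    return struct.unpack(">II", header[16:24])
```

Loop it over the dataset and flag anything that doesn't match the resolution you're about to train at.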
---
Medium shots: 001-024
 
Last edited:
  • Like
Reactions: AllStep