"Run enough seeds and you will see anus eyes, multiple anuses, and vaginanus."

Huh. I've never seen a Vaginass before.....
Bokeh? (FML, what's the name of that gfx light effect? Sounds like "bleech".)
"Awesome work. Thank you for the effort and sharing it."

So i went through about 6 months of code changes for k_ss and the gui, as that was apparently the timeframe when the issues others had started.
Going over changes in unfamiliar code isn't the most accurate thing. I did spot a couple of changes of a nature that could screw things up, but mostly the changes hadn't actually been directly for SD lora training; they'd mainly been for XL and merging/finetuning.
So i started testing different old versions and "patching" them to run; many wouldn't run or had conflicting deps.
Eventually i got a "version" to run, with various updates and finger crossing. I later found out that others had found that version able to run as well... *sigh*... could have saved some time and effort there.
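If you want to try an old version yourself, the rough shape of it is below. Treat it as a sketch: i'm assuming the standard bmaltais/kohya_ss repo, and the tag is a placeholder until i can check the exact working version.

# rough sketch of setting up an old gui version in its own clean venv
git clone https://github.com/bmaltais/kohya_ss.git
cd kohya_ss
# <old-tag> is a placeholder, not the confirmed working tag
git checkout tags/<old-tag>
# fresh venv so the old pinned deps don't clash with anything newer
python -m venv venv
venv\Scripts\activate
# (source venv/bin/activate on linux)
pip install -r requirements.txt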
I ran a training with mainly default values and unreviewed captions to see if there was any difference, and it seems to be rather noticeable.
[attachment 2992121: comparison grid]
The cause of the problem seems to be linked to bitsandbytes; whether it's just that or a combination of that and some other lib i can't say.
If you have issues training, it might be worth setting up an old version and seeing if things work out better. Version 0.35.0 of bitsandbytes works at least; not sure if any of the updates after that do, not tested yet. Will update with the exact gui tag version when i get access to check, as atm i can't remember exactly.
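If you just want to test the bitsandbytes downgrade on its own, something along these lines inside the gui's venv should do it. A sketch only; 0.35.0 is the one version i can vouch for.

# check what's currently installed
pip show bitsandbytes
# pin to the old version that worked
pip install bitsandbytes==0.35.0
# (on windows, old bitsandbytes versions may also need the patched
# dll files the gui setup normally copies in)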
It's a bit odd that XL training wasn't affected, but XL uses different learning rates etc, which suggests the issue is linked to those. Images did look overtrained in some ways, but drastically lowering learning rates didn't fix the issue, so there's something "more" to it.
Hopefully some of the people more familiar with the involved code can figure it out.
"Do you have any thoughts about how the checkpoint that the LORA is trained on contributes to the end result?"

So, having worked out some of the software issues, i've gotten back to trying to work out all the other training issues. But tbh, having had training unknowingly broken since april and still constantly trying to get it to work thinking it was the data or settings at fault, i'm kinda fed up with it :/
[attachment 3000873: test grid]
The grid above is the same as the top row in this one, just a different seed:
[attachment 3000833: test grid]
These images are in no way cherrypicked, for the simple fact that the prompt is purely "a woman <lora>". There's no trigger word; i simply forgot to remove that in the comment, so ignore it. As the zip filename suggests, it's trained on [link].
I did repeated attempts at training on cyberrealistic, elegance and dreamshaper, all of which i've trained other things on just fine, but they refused to work. All of them produced better images in this grid than they did when used as the training base. Some of the test images seem to have been generated more towards realistic than "render", which is why they are slightly off, but that's because of no neg prompt etc, so it shouldn't be too hard to fix.
I've not tested this for flexibility or anything, but it generates an actual face, which is massive progress from before. There are probably issues, and some things seem to be a bit seed dependent, but i guess we're used to that.
All 7 loras are in [link].
(Edited to add some pointless numbers)
Clearing out files/folders after all this, I deleted over 400 "settings files", but there would have been a lot more because the version of kohya i have to use doesn't automatically save them. Plus ~16000 "test" images from testing the loras (>6gb, not counting grids) and ~835gb of "discarded" loras...
I know i've cleared out loras and images before, but not how many; still, i suspect this covers the majority, which is kinda scary :/
"Most celebrity face data is well trained already into checkpoints. And it's against the rules to train a lora with body data, of course no creator would do that."

Determining likeness is a whole other issue though, as that's very subjective to human eyes. How well ppl see differences in faces comes greatly into play too. An easy test for that is to just look at all the "famous person" loras. If the creators of many of those thought they'd been "successful", you'd have to wonder wtf is going on.
There are loads of ppl that link to their trained stuff as examples of how great their training guides/instructions are, and i've come across quite a few where it's meant to be a whole bunch of different RL ppl but they've all got the same facial features. I'm extremely surprised the creators seemingly can't tell.
"Regarding your in depth dive on bits&bytes, it would make sense, as I noticed most of my old full FP16 trainings are not as good now but the less accurate BF16 trainings turn out better."

"Have you tried "pip install --upgrade --force-reinstall -r requirements.txt"?"

I initially ran trainings with bitsandbytes 0.35; when i had something that was "stable", i updated to 0.41.1 and ran the same again. While the result wasn't exactly the same, it was easily within what you'd expect. That being said, there are some problems, and they're not limited to external code; there's "something" within the trainer code too. Exactly what, i'm less sure of; there are too many variables and it takes far too long for me to run each "test", so i can't do it in any real and useful way unfortunately.
One thing i noticed: i can't use bf16, but i tried full fp16 and it "worked", in the sense that it trained and gave somewhat expected results. Whether it's better or not i can't really say, but using it with prodigy broke completely. It did run the training, but using the lora had no effect on generation. Unchecking full fp16 and it trained fine, so there's some kind of conflict there. If it's because i'm having to use older code, i don't know; old code with newer libs can cause its own problems.
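For reference, the gui options i mean map to roughly these sd-scripts flags. The exact names may differ between versions, so check your own train_network.py before copying; this is just to show the combination that broke for me.

# full fp16 on its own "worked":
accelerate launch train_network.py --mixed_precision=fp16 --full_fp16 ...
# full fp16 + prodigy trained, but produced a lora with no effect:
accelerate launch train_network.py --mixed_precision=fp16 --full_fp16 --optimizer_type=Prodigy ...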
Just as an addendum, there's also the matter of cache cleaning. Given the likes that the message recommending cleaning the cache received, I trust it applies in quite a few cases:

Have you tried "pip install --upgrade --force-reinstall -r requirements.txt"?
It's possible you have a bad/old package that keeps reinstalling:
pip uninstall torch
pip cache purge
pip install torch -f https://download.pytorch.org/whl/torch_stable.html
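And to confirm the reinstall actually took before re-running anything, a quick check like:

# confirm what pip thinks is installed
pip show torch
# confirm what python actually imports, and that cuda is still seen
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"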