I suspect that the puffy eyes have to do with the source material the checkpoint and/or LoRA was trained on.

Experiments with ControlNet:
Before:
[attached image]
After: (why do mine always look like they have tired eyebag eyes?)
[attached image]
I would be obliged if you would stop taking pictures of my soon-to-be wife
Smoookin.. Awesome stuff.
Yes, I agree: DAZ is more consistent because you have direct control, while SD is always a dice toss. However, SD is light years ahead in visuals and realism, and with ControlNet and OpenPose, SD is catching up to DAZ in repeatability and consistency. Also, with SD you are not forced to endure endless menus only to tweak one little thing.

Sorry to interfere as a humble SD novice (and already baffled by all this body-horror AI stuff...), but the first post (OP front page) is misleading regarding the way to implement LoRAs in SD; I lost two days trying to reconcile path problems and extension calls because of it. There is no need to use the additional-networks UI extension, since LoRAs are directly supported (or git pull is your friend), and it is a breeze. Thanks for all the effort, though; it is interesting to educate oneself, but I think DAZ is still far more efficient when you have precise Ren'Py needs (and we have those monster quasi-NASA rigs...).
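(For anyone hitting the same path problems: with the built-in support you just drop the LoRA's .safetensors file into the models/Lora folder and call it from the prompt. The filename and weight below are only placeholders.)
Code:
a photo of a woman, detailed face, soft lighting <lora:myLoraFile:0.8>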
Can't wait for the day when the "generative" part adds a few more loops to understand that you want SD to build a 3D character off one single 2D image, then dress/undress her, then build a LoRA or whatnot around her and turn her into a proper callable object that can be plugged consistently into scenes created via a similar approach.
You might want to hold off on updating, or at the very least give it some thought, depending on your setup and usage.

A1111 has released a new update (v1.3.0). In this update we get cross-attention optimization. I've made a test with all of this; settings and times are in the post. Are you using it? What are your favorites?
Not sure how you're running things, but usually that happens when you're missing the launch options.

Well, here we go again: another error preventing me from doing anything. Google told me nothing. Nothing happens after clicking Generate; Google gave a little info, but the lines are nowhere to be found in the .py file they said to edit:
RuntimeError: expected scalar type Float but found Half
--no-half
or --no-half-vae
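(For context, these flags don't go into any .py file; assuming the standard install, they go on the COMMANDLINE_ARGS line of the launcher script, e.g. webui-user.bat on Windows:)
Code:
set COMMANDLINE_ARGS=--no-half --no-half-vae
On Linux it's the equivalent COMMANDLINE_ARGS line in webui-user.sh.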
Using these; is it right? I was trying to speed things up a bit:
--xformers --opt-channelslast --disable-safe-unpickle --precision full --disable-nan-check --skip-torch-cuda-test --medvram --always-batch-cond-uncond --opt-split-attention-v1 --opt-sub-quad-attention --deepdanbooru --no-half-vae
Adding --no-half fixed it. Thanks for the other suggestions.

I think those three can't work together, as the code is set up with conditions so that only one of them is actually applied:
Code:
--xformers --opt-split-attention-v1 --opt-sub-quad-attention
I'd probably go with xformers.
Since you say you're looking for speed: if you don't need --medvram you should remove it, as that slows things down quite a bit, and I think --precision full increases VRAM usage, which defeats part of the point of --medvram.
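(To illustrate the "only one of them is actually applied" point, here is a minimal sketch of that kind of selection logic; this is not the actual webui source, and the function, argument names and priority order are only assumptions.)
Code:
# Illustrative sketch only, not the real webui code.
# The flags are checked in priority order, so passing several of them
# still results in exactly one cross-attention optimization being used.
def pick_cross_attention_optimization(args):
    if args.xformers:
        return "xformers"
    if args.opt_sub_quad_attention:
        return "sub-quadratic attention"
    if args.opt_split_attention_v1:
        return "split attention v1"
    return "default optimization"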
Are you trying to train a LoRA using kohya_ss? If so, the checkpoint you are training on is very important.

I was hoping it might help with some of the problems I've been having (that backfired...).
I've been trying to train a person for probably over 1k hours now, and I can't make sense of why it's behaving as it is.

To start at the "easy" end: in the beginning, when testing the training stages, I got someone with clearly either an Asian or an African origin, both in features and skin tone; eventually the few Asian cases dropped out completely. The problem is that the person is, without any doubt, white; even the fact that they have blue eyes should rule out much else, so I can't see why it's happening.

Another problem is that the first 1-2 stages of training pick up the body shape pretty much perfectly, but beyond that things just get flattened down.

I've tried simple captions, tagging everything, and tagging only specific things; it changes stuff, but nothing seems to affect the ethnicity and body issues. The captions are being read, and even without them it should "work".

Having trained other image sets pretty successfully, meaning you could easily tell it's the same people, I can't see why this is going so horribly wrong...
Suggestions are welcome
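(For reference, by captions and tags I mean the usual kohya-style layout: one .txt file per image, in a folder whose name encodes the repeat count and trigger word. The names below are just placeholders.)
Code:
train_data/
  10_mychar woman/
    img_001.png
    img_001.txt   (e.g. "mychar, a woman with blue eyes, smiling, outdoors")
    img_002.png
    img_002.txt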