> Speaking of finetuning, how do you do the training, like kohya_ss or OneTrainer etc., to get that native finetuning?

I've done kohya_ss or the diffusers command line.
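For the diffusers command-line route, the training entry point is one of the example scripts shipped in the diffusers repo. A minimal sketch, assuming a clone of https://github.com/huggingface/diffusers with accelerate installed; the model name, dataset path, step count, and learning rate below are placeholders, not a recipe:

```python
# Hypothetical launch of the diffusers example finetuning script.
# Assumes: a clone of the diffusers repo, `accelerate` installed, and a
# dataset folder of images + captions prepared per the script's docs.
import subprocess

cmd = [
    "accelerate", "launch",
    "examples/text_to_image/train_text_to_image.py",
    "--pretrained_model_name_or_path", "runwayml/stable-diffusion-v1-5",
    "--train_data_dir", "./my_dataset",        # placeholder dataset folder
    "--resolution", "512",
    "--train_batch_size", "1",
    "--gradient_accumulation_steps", "4",
    "--learning_rate", "1e-5",
    "--max_train_steps", "2000",
    "--mixed_precision", "fp16",
    "--output_dir", "./finetuned-model",
]
subprocess.run(cmd, check=True)
```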
> Also, can someone tell me how I can uninstall everything I installed during kohya, because it's giving me errors? I don't want my PC filled with useless files I don't use. Anyone?

If you did not use a venv, then the Python files were copied directly into one of the 20 versions of Python you might be using.
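A quick way to tell whether a venv was involved, and where pip has been putting things, is to ask the interpreter itself (standard library only; run it with the same Python you used for the install):

```python
# Quick check: am I inside a venv, and where do pip installs actually land?
# If sys.prefix == sys.base_prefix, this is the system interpreter and
# packages went straight into its site-packages.
import sys
import sysconfig

in_venv = sys.prefix != sys.base_prefix
print(f"Interpreter:   {sys.executable}")
print(f"In a venv:     {in_venv}")
print(f"site-packages: {sysconfig.get_paths()['purelib']}")
```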
> I've done kohya_ss or the diffusers command line.

Have you ever tried regularization for the training? I know that it prevents the style-bleed effect, but I don't know what kind of images are valid for it.
> If you did not use a venv, then the Python files were copied directly into one of the 20 versions of Python you might be using.

I found something called pip with about 7 GB of data. Is that it?
You would also need to delete the cache folder, which contains the "install files".
> Have you ever tried regularization for the training? I know that it prevents the style-bleed effect, but I don't know what kind of images are valid for it.

It is a second step on each iteration to "temper" the result. I've heard it helps keep the CLIP data from getting badly corrupted. Say you had an apple in 50% of your LoRA images that, for some reason, was drawn at an angle. You used WD14 and it tagged "apple", so the CLIP is being strongly trained on "apple".
> I found something called pip with about 7 GB of data. Is that it?

Likely, but I would recommend at minimum knowing which Python installation you used (Conda, Windows App Store Python, Python from python.org) before you start deleting folders. You can also look up where the cache is stored, as it should be a few GB as well.
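If you want to see concretely what is eating the space under a given interpreter before deleting anything by hand, a sketch like this helps (standard library only; `pip cache dir` / `pip cache purge` are real pip subcommands in pip 20.1+):

```python
# Rough sketch: size up the biggest packages in this interpreter's
# site-packages, then ask pip where its download cache lives.
import subprocess
import sys
import sysconfig
from pathlib import Path

site = Path(sysconfig.get_paths()["purelib"])
sizes = []
for entry in site.iterdir():
    if entry.is_dir():
        total = sum(f.stat().st_size for f in entry.rglob("*") if f.is_file())
        sizes.append((total, entry.name))

for total, name in sorted(sizes, reverse=True)[:10]:
    print(f"{total / 1e9:6.2f} GB  {name}")

subprocess.run([sys.executable, "-m", "pip", "cache", "dir"])
```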
> It is a second step on each iteration to "temper" the result. I've heard it helps keep the CLIP data from getting badly corrupted. Say you had an apple in 50% of your LoRA images that, for some reason, was drawn at an angle. You used WD14 and it tagged "apple", so the CLIP is being strongly trained on "apple".

I think I figured it out; got my storage back while everything still works.
Maybe just use -apple in your tagger (or scrub the captions after the fact; see the sketch below).
Or, if you have a lot of images, 1,000 or so, and you notice 50-100 instances of "apple" compared to 999 "1girl",
maybe throw in some regularization photos of an apple, or of 1girl holding an apple.
The issue would be that you need regularization for all subjects.
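On the tagger point: if the captions are already written to disk, you can strip an over-represented tag afterwards. A minimal sketch, assuming the usual WD14 output format of one comma-separated .txt per image; the folder path and tag name are placeholders:

```python
# Hypothetical helper: remove one over-represented tag (e.g. "apple")
# from WD14-style caption files (comma-separated tags, one .txt per image).
from pathlib import Path

DATASET_DIR = Path("./my_dataset")   # placeholder: captions next to images
DROP_TAG = "apple"

for caption_file in DATASET_DIR.glob("*.txt"):
    tags = [t.strip() for t in caption_file.read_text(encoding="utf-8").split(",") if t.strip()]
    kept = [t for t in tags if t != DROP_TAG]
    caption_file.write_text(", ".join(kept), encoding="utf-8")
    print(f"{caption_file.name}: {len(tags) - len(kept)} tag(s) removed")
```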
CLIP = Contrastive Language–Image Pre-training.
If you have ever had a LoRA that comes out garbled when the CLIP is used but works fine when it is disconnected, that is likely from bad text-encoder training.
It could be from over-describing in the CLIP without regularization.
It could also be from too high a text-encoder learning rate, but that is less likely, as most people reduce the TE rate.
There have been claims of teaching an art style with 10 photos of an apple in that style and 10 regularization images of a photo of an apple.
I haven't tried it, but my intuition says it would just give you a LoRA that can draw an apple.
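To tie the knobs above together, here is a rough sketch of launching a kohya-ss sd-scripts LoRA run with regularization images and a reduced text-encoder learning rate. The flag names come from sd-scripts' train_network.py; every path, repeat count, and learning-rate value is a placeholder, not a recommendation:

```python
# Hypothetical kohya-ss sd-scripts LoRA run with regularization and a
# lowered TE learning rate. Assumes a clone of
# https://github.com/kohya-ss/sd-scripts as the working directory.
import subprocess

cmd = [
    "accelerate", "launch", "train_network.py",
    "--pretrained_model_name_or_path", "./models/base-model.safetensors",
    "--network_module", "networks.lora",
    "--train_data_dir", "./train",      # e.g. train/10_mystyle
    "--reg_data_dir", "./reg",          # e.g. reg/1_apple (class images)
    "--output_dir", "./output",
    "--resolution", "512",
    "--unet_lr", "1e-4",
    "--text_encoder_lr", "5e-5",        # TE kept lower than the UNet LR
    "--max_train_epochs", "10",
]
subprocess.run(cmd, check=True)
```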
> Hi all! What if, when creating a wide-format picture, it draws two people instead of one? I would like to make pictures with one person for my desktop background. What are the options? (I use SD 1.5 models, I specify "solo, alone, 1girl" in the prompt, and "multiple human, double human" in the negative.)

Use inpaint to mask over the second figure, then in your prompt describe the objects you want in its place, along with the stylistic or scene parts of your original prompt. In the inpaint negative prompt, put "1girl, person, human".
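For anyone who would rather script that inpaint step than do it in the UI, a minimal sketch with the diffusers inpainting pipeline; the checkpoint name, file paths, and replacement prompt are placeholders:

```python
# Sketch of the inpaint advice above using diffusers' inpainting pipeline.
# White areas of the mask are repainted according to the prompt.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("wide_render.png").convert("RGB")
mask = Image.open("second_figure_mask.png").convert("L")  # white = repaint

result = pipe(
    prompt="scenic background, bookshelf, window",  # what replaces the figure
    negative_prompt="1girl, person, human",
    image=image,
    mask_image=mask,
).images[0]
result.save("solo_version.png")
```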
> Use inpaint to mask over the second figure, then in your prompt describe the objects you want in its place, along with the stylistic or scene parts of your original prompt. In the inpaint negative prompt, put "1girl, person, human".

It takes too much time. I want to find a way to generate it with the desired result directly.
> It takes too much time. I want to find a way to generate it with the desired result directly.

Check the Regional Prompter extension; it gives you ways to divide your canvas and assign prompt pieces to each region. It may help.
In my case, I generate a stream of images and some have to be deleted, since I would otherwise spend extra time on inpainting (stopping generation, manipulating the image, and generating again).
Maybe there are models in which this is fixed?
> Since I can't make Superwoman how I want, I'm thinking of using the ReActor extension. Can anyone tell me if it's worth trying, and does it work on 18+ images? Like, can I swap Scarlett Johansson's face onto anyone?

Anyone help?
> Anyone help?

What do you want to know? I've used ReActor in the past, and it worked pretty well. Just give it a shot!
> Check the Regional Prompter extension; it gives you ways to divide your canvas and assign prompt pieces to each region. It may help.

Thank you very much!
check [link]
> Anyone help?

I use it all the time; it works with 18+.
> I use it all the time; it works with 18+.

Does it work with anime-style images? Maybe you can try this image and show me?
> Does it work with anime-style images? Maybe you can try this image and show me?

It will depend on the photo you use as input: if you use a realistic face, it will add a realistic face. If you use an anime-style face, it should add an anime-style face.