[Stable Diffusion] Prompt Sharing and Learning Thread

deadshots2842

Member
Apr 30, 2023
188
290
Also, can someone tell me how I can uninstall everything I installed during the Kohya setup? It's giving me errors, and I don't want my PC filled with useless files I don't use. Anyone?
 

felldude

Active Member
Aug 26, 2017
572
1,695
Also, can someone tell me how I can uninstall everything I installed during the Kohya setup? It's giving me errors, and I don't want my PC filled with useless files I don't use. Anyone?
If you did not use a venv, then the Python files were copied directly into one of the 20 versions of Python you might be using.
You would also need to delete the cache folder, which contains the "install files".
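If you want to check before deleting anything, running something like this with the same Python you used for the Kohya setup will print where the packages and the pip cache actually ended up (paths are just whatever your system reports):

Python:
import subprocess
import sys
import sysconfig

# Which interpreter this is (Conda, Windows Store, python.org, ...)
print(sys.executable)

# Where pip copied the packages when no venv was used (site-packages)
print(sysconfig.get_paths()["purelib"])

# Where pip keeps its download/wheel cache (the multi-GB "install files")
subprocess.run([sys.executable, "-m", "pip", "cache", "dir"])

Running python -m pip cache purge afterwards clears that cache if that is where the space went.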
 

felldude

Active Member
Aug 26, 2017
572
1,695
Anyone else find that Euler draws better hands than dpmpp_2m or 3m (especially when doing img2img upscaling)?

ComfyUI_00075_.png
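If anyone wants to A/B it outside ComfyUI, a rough diffusers sketch along these lines works; the model id, input image, and strength are just placeholders for whatever you actually use:

Python:
import torch
from diffusers import (StableDiffusionImg2ImgPipeline,
                       EulerDiscreteScheduler,
                       DPMSolverMultistepScheduler)
from PIL import Image

# Swap in whatever checkpoint you actually upscale with
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

src = Image.open("lowres.png").convert("RGB").resize((1024, 1024))

# Same seed and settings, only the sampler changes
for name, sched in [("euler", EulerDiscreteScheduler),
                    ("dpmpp_2m", DPMSolverMultistepScheduler)]:
    pipe.scheduler = sched.from_config(pipe.scheduler.config)
    out = pipe(prompt="photo, detailed hands",
               image=src, strength=0.4, guidance_scale=7.0,
               generator=torch.Generator("cuda").manual_seed(42)).images[0]
    out.save(f"upscale_{name}.png")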
 

rogue_69

Newbie
Nov 9, 2021
87
298
I've been using KREA AI for the last few months, and they finally released their TOS, which has a no "Lewd/Pornographic" content clause in it. Looks like I'm going to have to learn to use ComfyUI. Where would be the best place to start learning? Are there any good starter tutorials someone can suggest?
 

deadshots2842

Member
Apr 30, 2023
188
290
If you did not use a venv, then the Python files were copied directly into one of the 20 versions of Python you might be using.
You would also need to delete the cache folder, which contains the "install files".
I found something called pip with about 7 GB of data. Is that it?
 

felldude

Active Member
Aug 26, 2017
572
1,695
Have you ever tried regularization for the training? I know that it prevents the style bleed effect, but I don't know what kind of images are valid for it.
It is a second step on each iteration to "temper" the result. I've heard it helps keep the CLIP data from getting badly corrupted. Let's say an apple appeared in 50% of your LoRA images, drawn at an angle for some reason, but you used WD14 and it tagged "apple", so the CLIP is being strongly trained on apple.

Maybe just use -apple in your tagger.
Or, if you have a lot of images (1,000 or so) and you notice 50-100 instances of apple compared to 999 1girls,
maybe throw in some regularization photos of an apple, or of 1girl holding an apple.
The issue is that you would need regularization for all subjects.

Contrastive Language–Image Pre-training - CLIP

If you have ever had a LoRA that is garbled when the CLIP is used but works fine when it is disconnected, that is likely from bad text encoder training,
and it could be from over-describing in the CLIP without regularization.
It could also be from too high a text encoder learning rate, but that is less likely, as most people reduce the TE rate.

There have been claims of teaching an art style with 10 photos of an apple in that style and 10 regularization images of a photo of an apple.
I haven't tried it, but my intuition says it would just end up as a LoRA that can draw an apple.
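A quick way to spot that kind of tag imbalance before training is to just count the tags across your caption files, something like this (assumes WD14-style comma-separated .txt captions sitting next to the images, and "train_images" is whatever your dataset folder is actually called):

Python:
from collections import Counter
from pathlib import Path

counts = Counter()
for txt in Path("train_images").glob("*.txt"):  # your dataset folder
    tags = [t.strip() for t in txt.read_text(encoding="utf-8").split(",")]
    counts.update(t for t in tags if t)

# Tags that only show up in a slice of the set are the ones CLIP will latch onto
for tag, n in counts.most_common(30):
    print(f"{n:5d}  {tag}")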


I found something called pip with about 7 GB of data. Is that it?
Likely, but I would recommend at minimum knowing which Python install you used (Conda, the Windows App Store Python, or Python from python.org) before you start deleting folders.

You can also look up where the pip cache is stored, as it should be a few GB as well.
 

deadshots2842

Member
Apr 30, 2023
188
290
Since I can't make Superwoman how I want, I'm thinking of using the ReActor extension. Can anyone tell me if it's worth trying, and does it work on 18+ images? Like, can I swap Scarlett Johansson's face onto anyone?
 

deadshots2842

Member
Apr 30, 2023
188
290
It is a second step on each iteration to "temper" the result. I've heard it helps keep the CLIP data from getting badly corrupted. Let's say an apple appeared in 50% of your LoRA images, drawn at an angle for some reason, but you used WD14 and it tagged "apple", so the CLIP is being strongly trained on apple.

Maybe just use -apple in your tagger.
Or, if you have a lot of images (1,000 or so) and you notice 50-100 instances of apple compared to 999 1girls,
maybe throw in some regularization photos of an apple, or of 1girl holding an apple.
The issue is that you would need regularization for all subjects.

Contrastive Language–Image Pre-training - CLIP

If you have ever had a LoRA that is garbled when the CLIP is used but works fine when it is disconnected, that is likely from bad text encoder training,
and it could be from over-describing in the CLIP without regularization.
It could also be from too high a text encoder learning rate, but that is less likely, as most people reduce the TE rate.

There have been claims of teaching an art style with 10 photos of an apple in that style and 10 regularization images of a photo of an apple.
I haven't tried it, but my intuition says it would just end up as a LoRA that can draw an apple.




Likely, but I would recommend at minimum knowing which Python install you used (Conda, the Windows App Store Python, or Python from python.org) before you start deleting folders.

You can also look up where the pip cache is stored, as it should be a few GB as well.
I think I figured it out; I got my storage back and everything still works.
 

moodorama

New Member
Mar 13, 2021
3
1
Hi all! When I create a wide-format picture, it generates two people instead of one. I would like to make pictures with a single person for my desktop background. What are the options?

(I use SD 1.5 models, I specify "solo, alone, 1girl" in the prompt, and "multiple human, double human" in the negative.)
 

osanaiko

Engaged Member
Modder
Jul 4, 2017
2,477
4,475
Hi all! When I create a wide-format picture, it generates two people instead of one. I would like to make pictures with a single person for my desktop background. What are the options?

(I use SD 1.5 models, I specify "solo, alone, 1girl" in the prompt, and "multiple human, double human" in the negative.)
Use inpaint to mask over the second figure, then in your prompt describe the objects you want in its place, along with the stylistic or scene parts of your original prompt. In the inpaint negative prompt, put "1girl, person, human".
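If you'd rather script it than click through the UI, the same idea in diffusers looks roughly like this (the model id, file names, and fill-in prompt are just placeholders for your own setup):

Python:
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16).to("cuda")

image = Image.open("wide_render.png").convert("RGB")
mask = Image.open("second_figure_mask.png").convert("L")  # white = area to repaint

out = pipe(
    prompt="scenic background, sunset sky, empty street",  # whatever should fill the gap
    negative_prompt="1girl, person, human",
    image=image,
    mask_image=mask,
).images[0]
out.save("wide_render_fixed.png")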
 

moodorama

New Member
Mar 13, 2021
3
1
Use inpaint to mask over the second figure, then in your prompt describe the objects you want in its place, along with the stylistic or scene parts of your original prompt. In the inpaint negative prompt, put "1girl, person, human".
It takes too much time. I want to find a way to generate the desired result directly.
In my case, I generate a stream of images and some have to be deleted, since I would spend extra time on inpainting (stopping generation, manipulating the image, and generating again).

Maybe there are models in which this is fixed?
 

EvylEve

Newbie
Apr 5, 2021
31
56
It takes too much time. I want to find a way to generate the desired result directly.
In my case, I generate a stream of images and some have to be deleted, since I would spend extra time on inpainting (stopping generation, manipulating the image, and generating again).

Maybe there are models in which this is fixed?
Check the Regional Prompter extension; it gives you ways to divide your canvas and assign prompt pieces to each region. It may help.
 

Synalon

Member
Jan 31, 2022
225
663
Does it work with anime-style images? Maybe you can try this image and show me?
It will depend on the photo you use as input: if you use a realistic face it will add a realistic face, and if you use an anime-style face it should add an anime-style face.
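For what it's worth, ReActor is more or less a wrapper around insightface's inswapper model, so you can see the same behaviour in a rough standalone sketch like this (file names are placeholders, and it assumes you already have the inswapper_128.onnx model on disk, since it's no longer officially hosted):

Python:
import cv2
import insightface
from insightface.app import FaceAnalysis

# Detector/embedder that finds and describes the faces in both images
app = FaceAnalysis(name="buffalo_l")
app.prepare(ctx_id=0, det_size=(640, 640))

# The swapper model itself; needs inswapper_128.onnx available locally
swapper = insightface.model_zoo.get_model("inswapper_128.onnx")

source = cv2.imread("face_reference.png")    # the face you want to paste in
target = cv2.imread("generated_image.png")   # the image whose face gets replaced

source_face = app.get(source)[0]  # assumes a face is detected in each image
target_face = app.get(target)[0]

result = swapper.get(target, target_face, source_face, paste_back=True)
cv2.imwrite("swapped.png", result)

Whatever style the detected source face has carries straight through to the paste, which is why the input photo matters so much.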