[Stable Diffusion] Prompt Sharing and Learning Thread

deadshots2842

Member
Apr 30, 2023
It is a second step on each iteration to "temper" the result. I've heard it helps keep the CLIP data from getting badly corrupted. Let's say you had an apple in 50% of your lora images that was drawn at an odd angle for some reason, but you used WD14 and it tagged "apple", so CLIP is being strongly trained on apple.

Maybe just use -apple in your tagger (or prune the tag from the caption files after the fact, see the sketch below).
Or, if you have a lot of images (1000 or so) and you notice 50-100 instances of apple compared to 999 1girls, maybe throw in some regularization photos of an apple, or of 1girl holding an apple.
The issue is you'd need regularization for every subject.
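If you want to strip a stray tag like that after tagging, here's a minimal sketch, assuming the usual kohya-style layout where each image has a same-named .txt caption file with comma-separated tags. The folder path and tag name are placeholders for your own setup:

```python
from pathlib import Path

DATASET_DIR = Path("train/10_mycharacter")  # hypothetical image/caption folder
UNWANTED = "apple"                          # tag you don't want CLIP trained on

for caption_file in DATASET_DIR.glob("*.txt"):
    tags = [t.strip() for t in caption_file.read_text(encoding="utf-8").split(",")]
    cleaned = [t for t in tags if t and t != UNWANTED]
    caption_file.write_text(", ".join(cleaned), encoding="utf-8")
    print(f"{caption_file.name}: {len(tags)} -> {len(cleaned)} tags")
```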

Contrastive Language–Image Pre-training (CLIP)

If you have ever had a lora that is garbled when the CLIP (text encoder) is used but works fine when it is disconnected, that is likely from bad text encoder training.
It could be from over-describing in the captions without regularization.
It could also be from too high a Text Encoder learning rate, but that is less likely since most people reduce the TE rate anyway.
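One way to test whether the text encoder half is the broken part is to strip the TE weights out of the lora and see if the garbling goes away. A rough sketch, assuming a kohya-style .safetensors lora where text encoder keys start with lora_te_ and UNet keys with lora_unet_ (check your own file's keys, naming can vary, and note this drops any embedded metadata):

```python
from safetensors.torch import load_file, save_file

SRC = "my_lora.safetensors"            # hypothetical input lora
DST = "my_lora_unet_only.safetensors"  # copy with the text encoder part removed

state = load_file(SRC)
# keep only UNet weights; drop everything the text encoder learned
unet_only = {k: v for k, v in state.items() if not k.startswith("lora_te_")}
print(f"kept {len(unet_only)} of {len(state)} tensors")
save_file(unet_only, DST)
```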

There have been claims of teaching an art style with 10 photos of an apple in the style and 10 regularization photos of a plain apple.
I haven't tried it, but my intuition says it would just end up as a lora that can draw an apple.




Likely, but I would recommend at minimum knowing which Python install you used (Conda, Windows App Store Python, Python from python.org) before you start deleting folders.

You can also look up where the cache is stored, as it should be a few GB as well.
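Quick sketch for checking which Python you're actually running and roughly how big the Hugging Face cache is. The cache path below is just the usual default (~/.cache/huggingface), yours may differ if HF_HOME is set or you're on Windows:

```python
import sys
from pathlib import Path

print("python executable:", sys.executable)  # reveals Conda vs Store vs python.org install
print("python version:   ", sys.version)

# rough size of the default Hugging Face cache (assumed default location)
hf_cache = Path.home() / ".cache" / "huggingface"
if hf_cache.exists():
    size = sum(f.stat().st_size for f in hf_cache.rglob("*") if f.is_file())
    print(f"{hf_cache}: {size / 1e9:.1f} GB")
else:
    print(f"{hf_cache} not found (cache may live elsewhere)")
```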
I think I figured it out. Got my storage back while everything still works.