[Stable Diffusion] Prompt Sharing and Learning Thread

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
Don't have time right atm, but will try later. As a quick rundown:
- <tbd>
- Horrible; it was just BLIP without beams, as that is still broken for me, yay.
- Settings are exactly the same as I used in the LoRA I posted a while ago: constant scheduler, 0.0001 LR, 128/64 network, AdamW 8-bit, cyber_3.3; I can't remember there being many other settings atm.
- The prompt for generation was basically just "woman dressed for jungle exploration". The AI clearly thinks you don't need too much clothing, probably due to the heat, and you can see she looks drenched in sweat :p
- "Nothing"; the coloring is something the AI picked up itself, it really likes using it for clothing. The rest is more than likely due to not having captioned anything about the clothing. The captions are so basic they mention next to nothing besides: woman, <optional pose>, <optional location>. The lack of beams with BLIP is horrible.
You can always use the Interrogate functions in A1111 img2img; it's much slower of course, because you have to do one image at a time. When I start on a fresh prompt, I sometimes interrogate an image with both CLIP and DeepBooru, take the best from both, then adjust and add my own stuff etc. Even though it's a big pain in da ass, it's well worth taking the time to get really good captions when training a LoRA, because good captions are very important for a good result. According to what I have read, the training is more sensitive and less forgiving to bad captions than to the quality of the images used for the training. Also, if you are not careful, a tag that is present often enough (because an item you are describing is consistent across many images) can create an unintentional trigger word. I wasn't aware of this when I trained my Kendra LoRA, so "headband" became one of the trigger words.
In some images there was a fence in the background and it became trained into the LoRA, so sometimes I need to add it to the negative prompt with a strong weight to avoid it in the generated images. In order to get good sample results during training you have to describe what is in the image, and there are only so many ways you can word it, so sometimes it can't be helped. You just need to do your best and cross your fingers.
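One way to catch accidental trigger words like "headband" or "fence" before training is to scan the caption files for tags that appear in a large share of the images. A minimal sketch, assuming kohya-style training data (one comma-separated .txt caption file per image); `frequent_tags` and its threshold are my own invention, not part of any trainer:

```python
from collections import Counter
from pathlib import Path

def frequent_tags(caption_dir, threshold=0.5):
    """Report tags appearing in more than `threshold` of the caption
    files -- candidates for unintended trigger words."""
    files = list(Path(caption_dir).glob("*.txt"))
    counts = Counter()
    for f in files:
        # one tag set per image, so repeats within a file don't inflate counts
        counts.update({t.strip() for t in f.read_text().split(",") if t.strip()})
    return {tag: n / len(files) for tag, n in counts.items()
            if n / len(files) > threshold}
```

Anything this flags besides your intended trigger word is worth either pruning from the captions or keeping in mind for the negative prompt later.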
 
  • Like
Reactions: Sepheyer

devilkkw

Member
Mar 17, 2021
303
1,034
Have you updated A1111 to 1.6.0?
After updating I'm unable to load any model; I keep getting an OOM error. So I switched back to 1.5.2, but I've modified the sampler to keep DPM 3.
Has anyone had issues on 1.6.0?
I haven't tested a clean install, maybe that solves the problem, but for now I'm keeping 1.5.2.
 

me3

Member
Dec 31, 2016
316
708
Have you updated A1111 to 1.6.0?
After updating I'm unable to load any model; I keep getting an OOM error. So I switched back to 1.5.2, but I've modified the sampler to keep DPM 3.
Has anyone had issues on 1.6.0?
I haven't tested a clean install, maybe that solves the problem, but for now I'm keeping 1.5.2.
Delete the venv folder; there's a bunch of conflicting dependencies etc., so a "fresh" install of that fixes the issues in most cases. It's what I had to do for it. Most of the install will just reuse your cached files anyway, so there isn't that much new downloading.
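The venv reset amounts to something like this; the install path is an assumption, so point WEBUI_DIR at your own A1111 folder:

```shell
# Path is an assumption; adjust WEBUI_DIR to your actual A1111 checkout.
WEBUI_DIR="${WEBUI_DIR:-$HOME/stable-diffusion-webui}"
if [ -d "$WEBUI_DIR/venv" ]; then
    rm -rf "$WEBUI_DIR/venv"      # drop the conflicting dependency set
    echo "venv removed; next launch will rebuild it"
fi
# Relaunching recreates the venv; pip reuses cached wheels, so little re-downloads:
#   cd "$WEBUI_DIR" && ./webui.sh     (Linux/macOS)
#   webui-user.bat                    (Windows)
```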
 
  • Like
Reactions: devilkkw and Mr-Fox

pazhentaigame

New Member
Jun 16, 2020
13
3
Anyone know how to generate up to a specific step?
Sometimes the image while it's generating and the final outcome are just too different.
I just want that half-way preview version of the generation,
but interrupting it results in an unfinished image instead.
I don't even know if it's possible to get that version.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
Anyone know how to generate up to a specific step?
Sometimes the image while it's generating and the final outcome are just too different.
I just want that half-way preview version of the generation,
but interrupting it results in an unfinished image instead.
I don't even know if it's possible to get that version.
Use fewer steps. In the live preview settings you can decide how often the preview is updated, meaning how many steps between updates. Let's say it is set to 10 steps and you are using 20 sampling steps for generating images; then the preview that you liked is somewhere between 10 and 20 steps. I would set the seed to static by copying the seed from the image whose preview you liked. Then use the X/Y/Z plot to test how many steps you should use to get it. Go to X/Y/Z plot in the scripts menu and select Steps for the x-axis.
Set it to 10-20 [11]; now it will generate one image for each number of steps (10, 11, 12...), in other words 11 images.
If you want fewer increments, and thus fewer images, simply set it to 10-20 [6]; it will use increments of 2, meaning 10, 12, 14 etc., and generate 6 images.
Once you have found the number of steps, you can use the X/Y/Z plot to try out the CFG scale in a similar way. I would recommend having your prompt mostly finalized before testing CFG scale. In the X/Y/Z plot select CFG Scale and set it to 4-12 [5] for increments of 2, or [9] for increments of 1.
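The bracketed-count syntax expands to evenly spaced values between the endpoints. A small sketch of just that case (the plot script's real parser handles more forms than this; `expand_range` is a made-up name for illustration):

```python
def expand_range(start, stop, count):
    """Mimic the X/Y/Z plot's "start-stop [count]" syntax:
    `count` evenly spaced values from start to stop, inclusive."""
    step = (stop - start) / (count - 1)
    return [round(start + i * step) for i in range(count)]

print(expand_range(10, 20, 11))  # 10-20 [11] -> [10, 11, ..., 20]
print(expand_range(10, 20, 6))   # 10-20 [6]  -> [10, 12, 14, 16, 18, 20]
print(expand_range(4, 12, 5))    # 4-12 [5]   -> [4, 6, 8, 10, 12]
```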
 
Last edited:

devilkkw

Member
Mar 17, 2021
303
1,034
delete the venv folder, there's a bunch of conflicting dependencies etc so a "fresh" install of that fixes the issues in most cases, it's what i had to do for it, most of the installs will just use your cached files anyway so there isn't that much new downloading.
I made a full clean install and it worked, but loading a 9 GB model still gets OOM; in 1.5.2 I don't get any error.
It's strange, it needs more inspecting; I think some settings need to be adjusted. Or wait for the next update, maybe something is wrong.
1.6.0 has some changes for working better with XL models, and memory management is different. But why loading a 1.5 model above 4 GB now gets OOM is a mystery.
 
  • Like
Reactions: Mr-Fox

me3

Member
Dec 31, 2016
316
708
I made a full clean install and it worked, but loading a 9 GB model still gets OOM; in 1.5.2 I don't get any error.
It's strange, it needs more inspecting; I think some settings need to be adjusted. Or wait for the next update, maybe something is wrong.
1.6.0 has some changes for working better with XL models, and memory management is different. But why loading a 1.5 model above 4 GB now gets OOM is a mystery.
1.6.0 moved a lot of the settings from the launcher to the UI. Low/med VRAM is still in the launcher, but xformers/SDP and other optimization settings are in the UI itself, and some/all of those settings no longer work unless set in the UI.
They even moved things like the face restore option (I haven't found a way to restore it to its old place yet), which gets really annoying when you have to keep turning it on and off.
 
  • Like
Reactions: Mr-Fox

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
1.6.0 moved a lot of the settings from the launcher to the UI. Low/med VRAM is still in the launcher, but xformers/SDP and other optimization settings are in the UI itself, and some/all of those settings no longer work unless set in the UI.
They even moved things like the face restore option (I haven't found a way to restore it to its old place yet), which gets really annoying when you have to keep turning it on and off.
You can add a face restore checkbox in the UI via Settings > Quicksettings: select face_restoration.
I also have face_restoration_model, which is convenient for switching between CodeFormer and GFPGAN.
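For reference, the quicksettings choices end up in the webui's config.json. The fragment below is an illustration of roughly how it looks in 1.6.x (the exact key may differ by version); it's normally edited through the Settings page rather than by hand:

```json
{
  "quicksettings_list": [
    "sd_model_checkpoint",
    "face_restoration",
    "face_restoration_model"
  ]
}
```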
 
  • Like
Reactions: Sepheyer

rogue_69

Newbie
Nov 9, 2021
79
245
I'm trying to get Stable Diffusion to create a face image without eyelashes. It's the one thing keeping me from getting the perfect face texture for a Daz Face Transfer (it draws the eyelashes on the eyelids). Even when I use img2img with a really low denoising strength, it still adds the eyelashes. I've tried putting eyelashes in the negative prompt. Any suggestions?
 

me3

Member
Dec 31, 2016
316
708
I'm trying to get Stable Diffusion to create a face image without eyelashes. It's the one thing keeping me from getting the perfect face texture for a Daz Face Transfer (it draws the eyelashes on the eyelids). Even when I use img2img with a really low denoising strength, it still adds the eyelashes. I've tried putting eyelashes in the negative prompt. Any suggestions?
Considering eyelashes are a very natural and common thing for a face to have, you might need to remove tags that are linked to having "normal features". Tags such as "disfigured" and/or "mutilated" (and similar) in the negative might be fighting/preventing it. Even a simple "ugly" might be causing it.
 
  • Like
Reactions: Mr-Fox

me3

Member
Dec 31, 2016
316
708
Can't get rid of tights when ControlNet is on.
I've tried them in the negative, and leaving them out of the positive;
they still come back in like 9/10 images.
What are you using ControlNet for, and what image is being used for it?
Does the image contain the tights, or are they showing up just from the generation? If they are already in the image used for ControlNet, you might just need to find another image with that pose/scene. Reducing the importance of ControlNet and making the prompt more important could work as well. There should be an option for which is given priority, with three options along the lines of balanced, ControlNet, or prompt.
 
  • Like
Reactions: Mr-Fox and onyx

rogue_69

Newbie
Nov 9, 2021
79
245
Considering eyelashes are a very natural and common thing for a face to have, you might need to remove tags that are linked to having "normal features". Tags such as "disfigured" and/or "mutilated" (and similar) in the negative might be fighting/preventing it. Even a simple "ugly" might be causing it.
As is always the case with me, I figured out a solution almost immediately after posting a question. I just used Inpaint instead of normal Img2Img. I blacked out the eyes and eyelids, and generated everything else besides those.
 
  • Like
Reactions: Mr-Fox

Jimwalrus

Active Member
Sep 15, 2021
890
3,287
What are you using ControlNet for, and what image is being used for it?
Does the image contain the tights, or are they showing up just from the generation? If they are already in the image used for ControlNet, you might just need to find another image with that pose/scene. Reducing the importance of ControlNet and making the prompt more important could work as well. There should be an option for which is given priority, with three options along the lines of balanced, ControlNet, or prompt.
Also, try using the term "pantyhose" rather than "tights"; generally speaking, use American English terms rather than British English ones.
 
  • Like
Reactions: Mr-Fox

pazhentaigame

New Member
Jun 16, 2020
13
3
What are you using ControlNet for, and what image is being used for it?
Does the image contain the tights, or are they showing up just from the generation? If they are already in the image used for ControlNet, you might just need to find another image with that pose/scene. Reducing the importance of ControlNet and making the prompt more important could work as well. There should be an option for which is given priority, with three options along the lines of balanced, ControlNet, or prompt.
Every image, from a nude leg to something I drew from scratch.
Every single ControlNet option: depth, lineart, etc.
Every checkpoint, 10+ of them that I've experimented with.
The tights just always pop up sometimes, somehow.
Ah, sorry, I shouldn't have brought the question up.
My work is just too complicated, creating a porn comic,
and it has some specific clothing designs in it,
so there are too many factors that could be the problem.
Thanks anyway.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
Every image, from a nude leg to something I drew from scratch.
Every single ControlNet option: depth, lineart, etc.
Every checkpoint, 10+ of them that I've experimented with.
The tights just always pop up sometimes, somehow.
Ah, sorry, I shouldn't have brought the question up.
My work is just too complicated, creating a porn comic,
and it has some specific clothing designs in it,
so there are too many factors that could be the problem.
Thanks anyway.
Don't forget to try adding weight to the negative prompt, for example: (tights:1.5).
Are you using any LoRA or embedding? That could be the problem. Even if you try many checkpoint models, if all of them are geared towards anime, for instance, then it's likely to come from the checkpoint, because tights are so common in anime. If you want the legs to be bare, put that in the positive prompt ("bare legs"). In the negative use "leg wear". You can also work with the color: if it's always white tights, use "white legs" etc.
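The (word:1.5) syntax just scales the attention given to that chunk of the prompt. A toy parser for the flat case, to show how the pieces split up; the webui's real parser also handles nesting, escapes, and bare parentheses, which this skips, and `parse_weights` is a made-up name:

```python
import re

def parse_weights(prompt):
    """Split an A1111-style prompt into (text, weight) pairs.
    Plain text gets weight 1.0; "(text:1.5)" gets 1.5. No nesting."""
    pairs, pos = [], 0
    for m in re.finditer(r"\(([^():]+):([\d.]+)\)", prompt):
        plain = prompt[pos:m.start()].strip(" ,")
        if plain:
            pairs.append((plain, 1.0))           # unweighted chunk
        pairs.append((m.group(1), float(m.group(2))))  # weighted chunk
        pos = m.end()
    tail = prompt[pos:].strip(" ,")
    if tail:
        pairs.append((tail, 1.0))
    return pairs

print(parse_weights("blurry, (tights:1.5), low quality"))
# -> [('blurry', 1.0), ('tights', 1.5), ('low quality', 1.0)]
```

So (tights:1.5) in the negative pushes "tights" away 1.5 times as hard as an unweighted token would.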