[Stable Diffusion] Prompt Sharing and Learning Thread

me3

Member
Dec 31, 2016
316
708
I have come across a few images like this on Civitai. The site has detected certain LoRAs being used, but I don't see the usual LoRA format <lora_name:strength> in the prompt. How is that possible?
ComfyUI doesn't include LoRAs in the prompt; they're loaded in separate nodes. I believe there's still a subsection in A1111 that lets you select LoRAs and set their weights directly in a set of dropdowns and input fields as well.
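If it helps to see the difference concretely, here's a rough sketch of the "LoRA outside the prompt" approach using the diffusers library, which is basically what ComfyUI's LoRA loader node does. The checkpoint ID and LoRA path below are placeholders, not anything detected from that image.

```python
# Rough sketch (diffusers): the LoRA is attached to the pipeline rather than
# written into the prompt, like ComfyUI's LoRA loader node does.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.load_lora_weights("path/to/some_lora.safetensors")  # placeholder path

image = pipe(
    "portrait photo of a woman, detailed skin",  # no <lora:...> tag needed here
    num_inference_steps=25,
    guidance_scale=7.0,
    cross_attention_kwargs={"scale": 0.8},  # LoRA strength, roughly like <lora:name:0.8> in A1111
).images[0]
image.save("lora_example.png")
```

In A1111 the same thing is just the <lora:name:0.8> tag in the prompt, which is why its absence usually points at a ComfyUI workflow.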
 
  • Like
Reactions: namhoang909

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Fixing hands etc.

Something I have experimented with is to use After Detailer (ADetailer) in inpaint, and only one hand at a time. I first mask the hand as best I can, with only hand_yolov8n.pt activated in ADetailer, then I generate, and with each attempt I play with the settings until I get a good result. You have many parameters to adjust. What seems to make the most difference is the "normal" denoising strength and the inpaint denoising strength in ADetailer; not going past 0.4 in either seems to be the rule of thumb. I switch between "only masked" and "whole picture", and I also switch between using the original seed and a random one. What works best seems to be case dependent. You can also use a specific prompt for only the hands while inpainting. Steps and resolution are something I have not experimented with very much, but they could be relevant.
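For anyone who prefers to see the idea as code rather than UI settings, here's a minimal sketch of masked hand inpainting at low denoising strength using plain diffusers. It's not the ADetailer extension itself, just the same underlying idea; the file names are placeholders, and the seed is simply the one visible in the demo file name below.

```python
# Minimal sketch, not the ADetailer extension itself: inpaint only a masked
# hand at a low denoising strength (<= 0.4, per the rule of thumb above).
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("full_image.png").convert("RGB")   # placeholder
hand_mask = Image.open("hand_mask.png").convert("RGB")     # white = area to repaint

result = pipe(
    prompt="detailed hand, five fingers",  # a hand-only prompt, as mentioned above
    image=init_image,
    mask_image=hand_mask,
    strength=0.38,                         # comparable to the ADetailer inpaint denoising used below
    num_inference_steps=30,
    generator=torch.Generator("cuda").manual_seed(624569440),  # reusing the original seed
).images[0]
result.save("fixed_hand.png")
```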

For demonstration purposes I generated a simple image with the LCM extension.
6a6fd5b1-7e70-4690-b42a-e2410493f8f4-624569440.png
As you can see, the left hand needs some fixing; not the worst case obviously, but a good example nonetheless.
When inpainting there are a few helpful quick commands you can see if you hover the mouse over "i" in the image box.
Inpaint quick comands.png
I always zoom in to make it easier to mask the hand, and typically I mask the entire hand from the wrist. If you have a mouse with DPI settings, change it to a slower input; this also makes it easier, of course.
Inpainting hands.png
I start out with 0.1 denoising, and for the ADetailer inpaint denoising I typically use 0.38. "Only masked" and the original seed were used.
Inpainting hands 2.png
We could be happy with this, or experiment a bit like I always do. I leave the normal denoising at 0.1 and lower the ADetailer inpaint denoising to 0.28.
Inpainting hands 3.png
The hand is slightly sharper but the fingernail on the middle finger is now a little deformed.
It's always worth experimenting but in this case the previous result was better.
Ok, so let's finish off this image. I do the same process in inpainting for the face, with ADetailer mediapipe_face_mesh_eyes_only at an ADetailer inpaint denoising of 0.22, and for the lips with an ADetailer inpaint denoising of 0.3, using 0.01 for the "normal" denoising. I'm also using GFPGAN postprocessing. I didn't add any prompt for the lips in ADetailer.
I'm doing this to improve the face, lips and eyes and get better details before upscaling. I did it in inpaint so I wouldn't lose the progress on the hands.
Inpainting refining face.png
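As a side note, if you ever want GFPGAN outside the webui's postprocessing tab, this is roughly how the standalone gfpgan package is used; the weights path here is an assumed local file, not something from this workflow.

```python
# Sketch of standalone GFPGAN face restoration (the webui runs this internally
# when GFPGAN postprocessing is enabled). The weights path is a placeholder.
import cv2
from gfpgan import GFPGANer

restorer = GFPGANer(
    model_path="GFPGANv1.4.pth",  # assumed local weights file
    upscale=1,                    # keep resolution, just restore the face
    arch="clean",
    channel_multiplier=2,
)

img = cv2.imread("inpainted_face.png", cv2.IMREAD_COLOR)
_, _, restored = restorer.enhance(
    img, has_aligned=False, only_center_face=False, paste_back=True
)
cv2.imwrite("restored_face.png", restored)
```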
So let's upscale. I'm using SD Upscale in the script menu with the 4x_NMKD-Siax_200k upscaler, only 2x for this demo, and with 0.01 denoising. This is because I'm using LCM models, and it seems that all the numbers need to be decreased greatly, even the denoising strength, for a good result. It's important not to use postprocessing when upscaling because it will mess with the face too much and undo our progress. You can use ADetailer, but don't use it for hands, only eyes and maybe lips, because otherwise it will again undo the progress on the hands.
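What SD Upscale is doing under the hood is essentially "enlarge the image, then run low-denoise img2img over it (in tiles)". Here's a simplified, non-tiled sketch of that idea in diffusers; a plain Lanczos resize stands in for the 4x_NMKD-Siax_200k model, and the file names and prompt are placeholders.

```python
# Simplified sketch of the SD Upscale idea (no tiling, plain resize instead of
# an ESRGAN upscaler): enlarge the image, then img2img at low denoising.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

img = Image.open("before_upscale.png").convert("RGB")
img = img.resize((img.width * 2, img.height * 2), Image.LANCZOS)  # 2x, as in the demo

result = pipe(
    prompt="same prompt as the original generation",
    image=img,
    strength=0.15,           # in plain diffusers, strength * steps must reach at least 1 step;
    num_inference_steps=30,  # the 0.01 value above only makes sense inside the webui with LCM
    guidance_scale=5.0,
).images[0]
result.save("upscaled.png")
```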

The final result:
00014-624569440.png

I hope this was helpful. If you have any questions or comments I will try to answer, and if someone else wants to chime in, that's of course welcome.

PS.
If you are wondering what the fudgestick LCM is: Latent Consistency Model, and it's the latest "rage" or trend. The point of it is to use far fewer steps and a lower CFG scale to decrease generation times while still getting a good result.
There is "Turbo" for SDXL models as well as LCM.
Here's an article about it on Civitai if you want to learn more.
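If you want to try the LCM idea outside the webui, here's a hedged sketch with diffusers using the public LCM-LoRA; the repo IDs are the commonly used Hugging Face ones, not anything from this post.

```python
# Sketch of LCM with diffusers: swap in the LCM scheduler, attach the LCM-LoRA,
# then generate with very few steps and a low CFG scale.
import torch
from diffusers import StableDiffusionPipeline, LCMScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

image = pipe(
    "photo of a woman on a beach, golden hour",
    num_inference_steps=6,   # far fewer steps than the usual 20-30
    guidance_scale=1.5,      # much lower CFG than the usual ~7
).images[0]
image.save("lcm_test.png")
```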
 
Last edited:

me3

Member
Dec 31, 2016
316
708
....
PS.
If you are wondering what the fudgestick LCM is: Latent Consistency Model, and it's the latest "rage" or trend. The point of it is to use far fewer steps and a lower CFG scale to decrease generation times while still getting a good result.
There is "Turbo" for SDXL models as well as LCM.
Here's an article about it on Civitai if you want to learn more.
There are LoRA versions for Turbo too, and some are made in versions aimed at specific samplers; they work fairly well with most models I've tested them with. It seems more "trainers" are including one or both of these methods in their models as well, so if people are updating those, it's worth checking, as it can screw with your results if you're still running the old CFG/steps.
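For reference, this is roughly what the Turbo side looks like with the public SDXL-Turbo checkpoint in diffusers: one to a few steps with CFG effectively off. The repo ID is Stability's public one, not a specific model from this thread.

```python
# Sketch of SDXL-Turbo usage: very few steps and guidance disabled.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a cinematic photo of a lighthouse at dusk",
    num_inference_steps=1,   # Turbo is trained for single/few-step generation
    guidance_scale=0.0,      # CFG is effectively turned off
).images[0]
image.save("turbo_test.png")
```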

There's something called DPO as well. From what I gathered from glancing at some stuff, it's meant to follow prompts more "accurately". Whether it actually works or is any improvement I can't say; I haven't had a chance to test it yet.
 
  • Like
Reactions: Mr-Fox

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
There are LoRA versions for Turbo too, and some are made in versions aimed at specific samplers; they work fairly well with most models I've tested them with. It seems more "trainers" are including one or both of these methods in their models as well, so if people are updating those, it's worth checking, as it can screw with your results if you're still running the old CFG/steps.

There's something called DPO as well. From what I gathered from glancing at some stuff, it's meant to follow prompts more "accurately". Whether it actually works or is any improvement I can't say; I haven't had a chance to test it yet.
There is an LCM sampler; it's a bit muddy atm how you get it, though. According to the article you get it when you install AnimateDiff. I have the sampler but I'm not sure when or how I got it. I had installed AnimateDiff while experimenting with gif and vid making, and then I also got the LCM extension. It was after this that I discovered I had a new sampler. It works much better with any additional LCM model you download than with what is included with the extension. It's worth getting the extension though for very fast, simple image generation. It comes with a special integrated version of DreamShaper 7 specific to the extension. The settings and prompt are very limited, but it's for small and simple images only, and you can make batches of 100 images. It also has img2img and vid2vid.
 
Last edited:
  • Like
Reactions: Sepheyer

hotnloaded

Newbie
Nov 29, 2022
64
2,828
This is a broad question and maybe it has been answered before, but what are some tips for writing prompts that involve sex positions?
I can do single-woman poses with no problems, but whenever I do a prompt with a LoRA pose, it turns into a nightmare image.

Sometimes I would even copy prompts word for word from another picture and get the same nightmare results.


Any idea how I should approach this?
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
This is a broad question and maybe it has been answered before, but what are some tips for writing prompts that involve sex positions?
I can do single-woman poses with no problems, but whenever I do a prompt with a LoRA pose, it turns into a nightmare image.

Sometimes I would even copy prompts word for word from another picture and get the same nightmare results.


Any idea how I should approach this?
The issue is not the prompting. Any interaction between subjects is very problematic with Stable Diffusion; it seems to be the interference between objects and between persons that is the problem. What is the difference between two different surfaces touching and one single object?.. LoRAs and ControlNet etc. are likely the solution.
 
Last edited:

hotnloaded

Newbie
Nov 29, 2022
64
2,828
The issue is not the prompting. Any interaction between subjects is very problematic with Stable Diffusion; it seems to be the interference between objects and between persons that is the problem. What is the difference between two different surfaces touching and one single object?.. LoRAs and ControlNet etc. are likely the solution.
I've been using LoRAs but not ControlNet. So I am guessing all those AI sex images (blowjob, reverse cowgirl, etc...) were done using ControlNet?

I guess I was just hoping it was going to be a lot more straightforward lol
 

theMickey_

Engaged Member
Mar 19, 2020
2,193
2,824
I'd try ControlNet as well instead of LoRAs -- because the few times I've tried to use LoRAs for a pose, it always kinda "bled" too much into the original checkpoint/model I was using and made the picture worse, if that makes sense.

The way I'd do it is to find a decent reference picture and use ControlNet and/or "MultiArea Conditioning" if you want to have multiple characters in your picture that are not directly "connected" to or touching each other. Probably won't work with most sex poses though...
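In case it helps, this is roughly what the ControlNet pose approach looks like in diffusers with the public OpenPose ControlNet. The model IDs are the commonly used public ones, and the pose image is a placeholder extracted from whatever reference picture you pick.

```python
# Sketch of pose control with ControlNet (OpenPose) as an alternative to pose LoRAs.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

pose = Image.open("reference_pose.png")  # an OpenPose skeleton made from the reference picture

image = pipe(
    "two people dancing, studio lighting",
    image=pose,
    num_inference_steps=25,
    guidance_scale=7.0,
).images[0]
image.save("controlnet_pose.png")
```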

1703773742396.png
 

hotnloaded

Newbie
Nov 29, 2022
64
2,828
I'd try ControlNet as well instead of LoRAs -- because the few times I've tried to use LoRAs for a pose, it always kinda "bled" too much into the original checkpoint/model I was using and made the picture worse, if that makes sense.

The way I'd do it is to find a decent reference picture and use ControlNet and/or "MultiArea Conditioning" if you want to have multiple characters in your picture that are not directly "connected" to or touching each other. Probably won't work with most sex poses though...

I might just try to make poses in Daz and make them work with ControlNet. Thanks for the info.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,570
3,766
Bros, has anyone had this issue with Kohya where a subprocess can't be launched? I just had Kohya reinstalled because Windows had to do a factory reset, and now I am getting an error when trying to caption the training images and when trying to train a LoRA. Hoping someone could save me a massive amount of time :)

kohya.png
 

theMickey_

Engaged Member
Mar 19, 2020
2,193
2,824
...anyone had this issue with Kohya where a subprocess can't be launched?
I've never used or even installed Kohya myself, but I did some quick research: the main error you're facing is this: ModuleNotFoundError: No module named 'tensorflow'. And according to an issue I found on the kohya_ss GitHub repo, this might be related to the version of Python you're using. So please check your Python version; it should be 3.10 according to the solution in the issue I linked before. If you're using a different version, the setup.bat might not be able to find a working tensorflow version for it and therefore won't install it.

The kohya_ss installation instructions explicitly ask for version 3.10 of Python, too.

Hope that helps, or at least gives you a good starting point on what to check for.
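A quick way to confirm what the environment is actually running (just a generic check, nothing Kohya-specific):

```python
# Confirm which Python interpreter is active and whether tensorflow imports.
import sys

print(sys.version)  # kohya_ss expects this to start with 3.10

try:
    import tensorflow as tf
    print("tensorflow", tf.__version__)
except ModuleNotFoundError:
    print("tensorflow is not installed in this interpreter")
```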
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,570
3,766
I've never used or even installed Kohya myself, but I did some quick research: the main error you're facing is this: ModuleNotFoundError: No module named 'tensorflow'. And according to an issue I found on the kohya_ss GitHub repo, this might be related to the version of Python you're using. So please check your Python version; it should be 3.10 according to the solution in the issue I linked before. If you're using a different version, the setup.bat might not be able to find a working tensorflow version for it and therefore won't install it.

The kohya_ss installation instructions explicitly ask for version 3.10 of Python, too.

Hope that helps, or at least gives you a good starting point on what to check for.
And then she said: let's be friends, ok?
---
FML, I hate this Python / torch version teasing and edging, though normally I never grow tired of it.

So, I checked my Python and it was 3.10 exactly. Kohya_ss asked for 3.10 but offered 3.10.9 as a download, so I grabbed that.

Then when getting ready to update ComfyUI I saw a new line among their installation instructions on ComfyUI's github page: "This is the command to install pytorch nightly instead which has a python 3.12 package and might have performance improvements:"

CUI's requirement for an oldass torch, which in turn required an oldass Python, was always anal and always a pain in the back, holding back the whole Python suite. And I go, whoa, the CUI guys finally decided to modernize their app, welcome to the future baby. So, I uninstalled 3.10.9 and installed 3.12.

Nice! CUI core ran aight, and it was time to install all those CUI subpackages.

Of course, the key component of many such packages was numba and it said something like: "Numba requires python < 3.11".

FML brothers.

PS. So, does Kohya_ss work with 3.12? Yeah no. Cause a subcomponent called onnxruntime craps the bed: "ERROR: No matching distribution found for onnxruntime". Looks like 3.12 is too cutting edge for onnx and they are prolly up to 3.11.98.97.97.99 still.

PPS Back to 3.10.9:
"Heyy"​
"Who dis?"​
"I miss you boo"​
"Wrong number"​
"Hey dont be like dat"​
 
Last edited:

SDAI-futa

Newbie
Dec 16, 2023
29
31
The issue is not the prompting. Any interaction between subjects is very problematic with Stable Diffusion; it seems to be the interference between objects and between persons that is the problem. What is the difference between two different surfaces touching and one single object?.. LoRAs and ControlNet etc. are likely the solution.
Use posenet and add a second person, and use Regional Prompter to divide the descriptions. I wish I had an example handy. Sex positions are hard and take a lot of patience.
 
  • Like
Reactions: Mr-Fox

me3

Member
Dec 31, 2016
316
708
Before stable diffusion - I always wanted my 3D models to have a painted / artistic aesthetic to them, as if hand drawn.
I would have expected a solution to this to be available through custom shaders, but I was always disappointed with the results.

AI + rotoscoping feels like the more likely technology to get there. Imagine extremely detailed hand-drawn art, animated and rendered at the speed of 3D - if that can be achieved, it's almost the best of both worlds.

--

Before SD this was impossible.

Animation in old Disney movies always had extremely detailed backgrounds and then simple, flat-shaded characters / animation, because they had to draw them frame by frame. If a single frame takes an artist 3 weeks, you can't possibly achieve 24 frames per second, and the likelihood of consistency falls dramatically as well.

This would be something AI could do (hopefully) that is essentially impossible today.
Stumbling around through some images I happened to notice this. I haven't had any chance to test it yet, but looking at the images it seems to have some variation in style outputs. No idea if it can help with what you're after or if it even works, but it might be worth giving it a test.
 
  • Like
Reactions: Mr-Fox

theMickey_

Engaged Member
Mar 19, 2020
2,193
2,824
FML, I hate this Python / torch version teasing and edging, though normally I never grow tired of it.
You could try to install different versions of Python at the same time, and either just add your "main" (nightly, up-to-date) version to your PATH variable and change all scripts/batch files needing a different version to include the full path to the 2nd installation directory, or do something like the solution from the issue I've linked in my previous post suggests: rename python.exe to python310.exe for version 3.10 and add both installations to your PATH. Then you'll just need to replace "python.exe" with "python310.exe" in your Kohya scripts.

But to be fair: this sounds like it would make troubleshooting a lot worse in case something isn't working...
 
  • Red Heart
Reactions: Sepheyer

Sepheyer

Well-Known Member
Dec 21, 2020
1,570
3,766
You could try to install different versions of Python at the same time, and either just add your "main" (nightly, up-to-date) version to your PATH variable and change all scripts/batch files needing a different version to include the full path to the 2nd installation directory, or do something like the solution from the issue I've linked in my previous post suggests: rename python.exe to python310.exe for version 3.10 and add both installations to your PATH. Then you'll just need to replace "python.exe" with "python310.exe" in your Kohya scripts.

But to be fair: this sounds like it would make troubleshooting a lot worse in case something isn't working...
So, yeah, your thread here nailed it and solved the issue.

Thank you ser.
 
  • Yay, new update!
Reactions: theMickey_

Sepheyer

Well-Known Member
Dec 21, 2020
1,570
3,766
Tryina figure out how to tackle a bodypaint LoRA dataset.

The first attempt was using HoneySelect2: superimposing dressed images onto naked images and averaging them out, but alas. It produces body-forming clothes but without the painted defects:
a_08525_.png
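For what it's worth, the "average the dressed and naked renders" step from that first attempt is just an image blend. Here's a minimal Pillow sketch with placeholder file names, assuming both renders share the same camera angle and resolution.

```python
# Minimal sketch of averaging a dressed render over a naked render (Pillow).
from PIL import Image

dressed = Image.open("hs2_dressed.png").convert("RGB")  # placeholder render
naked = Image.open("hs2_naked.png").convert("RGB")      # placeholder render

# 50/50 blend: the clothes end up semi-transparent over the skin
averaged = Image.blend(naked, dressed, alpha=0.5)
averaged.save("bodypaint_candidate.png")
```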

So, here's the second attempt - rendering naked ladies in CUI and then manually painting them up, leaving huge gaps in coverage. Oddly satisfying; probably the same kinda high those guys who paint Space Marines catch.

