[Stable Diffusion] Prompt Sharing and Learning Thread

Sepheyer

Well-Known Member
Dec 21, 2020
1,526
3,596
Bros, anyone had this issue with Kohya where a subprocess can't be launched? I just reinstalled Kohya because Windows had to do a factory reset, and now I get an error both when trying to caption the training images and when trying to train a LoRA. Hoping someone could save me a massive amount of time :)

kohya.png
 

theMickey_

Engaged Member
Mar 19, 2020
2,107
2,646
...anyone had this issue with Kohya where a subprocess can't be launched?
I've never used or even installed Kohya myself, but I did some quick research: the main error you're facing is this: ModuleNotFoundError: No module named 'tensorflow'. According to an issue I found on the Kohya_ss GitHub repo, this might be related to the version of Python you're using. So please check your Python version; it should be 3.10 according to the solution in the issue I linked before. If you're using a different version, setup.bat might not be able to find a working tensorflow version for it and therefore won't install it.

The README explicitly asks for version 3.10 of Python, too.

Hope that helps, or at least gives you a good starting point on what to check for.
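If you want to rule it out quickly, here's a tiny check you could drop at the top of any of the Kohya scripts (just a sketch, nothing Kohya-specific in it):

```python
import sys

# Kohya_ss targets Python 3.10; on other versions pip may simply find no
# matching tensorflow wheel, so setup.bat ends up not installing it at all
print(sys.version)
if sys.version_info[:2] != (3, 10):
    raise SystemExit("Expected Python 3.10, got %d.%d" % sys.version_info[:2])
```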
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,526
3,596
I've never used or even installed Kohya myself, but I did some quick research: the main error you're facing is this: ModuleNotFoundError: No module named 'tensorflow'. According to an issue I found on the Kohya_ss GitHub repo, this might be related to the version of Python you're using. So please check your Python version; it should be 3.10 according to the solution in the issue I linked before. If you're using a different version, setup.bat might not be able to find a working tensorflow version for it and therefore won't install it.

The README explicitly asks for version 3.10 of Python, too.

Hope that helps, or at least gives you a good starting point on what to check for.
And then she said: let's be friends, ok?
---
FML, I hate this python / torch version teasing and edging, though normally I never grow tired of it.

So, I checked my Python and it was 3.10 exactly. Kohya_ss asked for 3.10 but offered 3.10.9 as the download, so I grabbed that.

Then, when getting ready to update ComfyUI, I saw a new line among the installation instructions on ComfyUI's GitHub page: "This is the command to install pytorch nightly instead which has a python 3.12 package and might have performance improvements:"

CUI's requirement for an oldass torch, which in turn required an oldass Python, was always a pain in the back holding the whole Python suite hostage. And I go: whoa, the CUI guys finally decided to modernize their app, welcome to the future, baby. So I uninstalled 3.10.9 and installed 3.12.

Nice! CUI core ran aight, and it was time to install all those CUI subpackages.

Of course, the key component of many such packages was numba, and it said something like: "Numba requires python < 3.11".

FML brothers.

PS. So, does Kohya_ss work with 3.12? Yeah, no. Because a subcomponent called onnxruntime craps the bed: "ERROR: No matching distribution found for onnxruntime". Looks like 3.12 is too cutting edge for onnx; they are prolly up to 3.11.98.97.97.99 still.
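If you ever want to see that wall coming before you slam into it: "No matching distribution found" mostly just means PyPI has no wheel built for your interpreter. A little sketch that asks PyPI which Python tags the latest onnxruntime wheels actually ship with (only uses the public PyPI JSON API, nothing exotic):

```python
import json
import urllib.request

# fetch the metadata of the latest onnxruntime release from PyPI
with urllib.request.urlopen("https://pypi.org/pypi/onnxruntime/json") as resp:
    data = json.load(resp)

# wheel filenames look like onnxruntime-1.16.3-cp310-cp310-win_amd64.whl;
# the third dash-separated field is the Python tag
tags = sorted({f["filename"].split("-")[2]
               for f in data["urls"] if f["filename"].endswith(".whl")})
print(tags)  # no cp312 in the list -> pip on 3.12 has nothing to install
```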

PPS Back to 3.10.9:
"Heyy"​
"Who dis?"​
"I miss you boo"​
"Wrong number"​
"Hey dont be like dat"​
 
Last edited:

SDAI-futa

Newbie
Dec 16, 2023
28
30
The issue is not the prompting. Any interaction between subjects is very problematic with Stable Diffusion; it seems to be the interference between objects and between persons that is the problem. What is the difference between two different surfaces touching and one single object? LoRAs, ControlNet etc. are likely the solution.
Use an OpenPose ControlNet to add a second person, and use a regional prompter to divide the descriptions. I wish I had an example handy. Sex positions are hard and take a lot of patience.
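The closest thing to an example I can sketch is the ControlNet half of it, in diffusers terms (the model names and the pose map are placeholders, and the regional prompting part needs an extension, so it's left out):

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# pose conditioning: a skeleton map containing TWO figures pins both
# subjects in place, which is most of the battle in interaction scenes
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

pose = load_image("two_person_pose.png")  # hypothetical pre-made OpenPose map
image = pipe(
    "two people dancing together, detailed, cinematic lighting",
    image=pose,
    num_inference_steps=30,
).images[0]
image.save("two_people.png")
```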
 
  • Like
Reactions: Mr-Fox

me3

Member
Dec 31, 2016
316
708
Before stable diffusion - I always wanted my 3D models to have a painted / artistic aesthetic to them. As if hand drawn.
I would have expected a solution to this to be available through custom shaders, but I was always disappointed with the results.

AI + rotoscoping feels like the more likely technology to get there. Imagine extremely detailed hand-drawn art, animated and rendered at the speed of 3D; if that can be achieved, it's almost the best of both worlds.

--

Before SD this was impossible.

Animation in old Disney movies always had extremely detailed backgrounds and then simple flat-shaded characters / animation, because they had to draw them frame by frame. If a single frame takes an artist 3 weeks, you can't possibly achieve 24 frames per second, and the likelihood of consistency falls dramatically as well.

This would be something AI could do (hopefully) that is essentially impossible today.
Stumbling around through some images, I happened to notice a model I've not had any chance to test yet, but looking at the images it seems to have some variation in style outputs. No idea if it can help with what you're after or if it even works, but it might be worth giving it a test.
 
  • Like
Reactions: Mr-Fox

theMickey_

Engaged Member
Mar 19, 2020
2,107
2,646
FML, I hate this python / torch version teasing and edging, though normally I never grow tired of it.
You could try to install different versions of Python at the same time, and either just add your "main" (nightly, up-to-date) version to your PATH variable and change all scripts/batch files needing a different version to include the full path to the 2nd installation directory, or do something like the solution from the issue I've linked in my previous post suggests: rename python.exe to python310.exe for version 3.10 and add both installations to your PATH. Then you'll just need to replace "python.exe" with "python310.exe" in your Kohya scripts.

But to be fair: this sounds like it would make troubleshooting a lot worse in case something isn't working...
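Another route that avoids renaming binaries entirely: give each tool its own venv, created from the right interpreter via the Windows "py" launcher. A sketch, assuming both Pythons were installed with the python.org installers (which register themselves with the launcher):

```python
import subprocess

# one venv per tool, each pinned to its own interpreter -- no PATH juggling;
# "py -X.Y" is the Windows launcher that ships with the python.org installers
subprocess.run(["py", "-3.10", "-m", "venv", "kohya-venv"], check=True)
subprocess.run(["py", "-3.12", "-m", "venv", "comfy-venv"], check=True)
```

Once kohya-venv is activated, a plain "python" resolves to 3.10 inside it, so the Kohya scripts wouldn't need any editing at all.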
 
  • Red Heart
Reactions: Sepheyer

Sepheyer

Well-Known Member
Dec 21, 2020
1,526
3,596
You could try to install different versions of Python at the same time, and either just add your "main" (nightly, up-to-date) version to your PATH variable and change all scripts/batch files needing a different version to include the full path to the 2nd installation directory, or do something like the solution from the issue I've linked in my previous post suggests: rename python.exe to python310.exe for version 3.10 and add both installations to your PATH. Then you'll just need to replace "python.exe" with "python310.exe" in your Kohya scripts.

But to be fair: this sounds like it would make troubleshooting a lot worse in case something isn't working...
So, yeah, your thread here nailed it and solved the issue.

Thank you ser.
 
  • Yay, new update!
Reactions: theMickey_

Sepheyer

Well-Known Member
Dec 21, 2020
1,526
3,596
Tryina figure out how to tackle a bodypaint LoRA dataset.

The first attempt was using HoneySelect2: superimposing dressed images onto naked images and averaging them out, but alas, it produces bodyforming clothes without the painted defects:
a_08525_.png
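(The averaging itself is just an alpha blend; something like this with PIL, filenames made up, and both renders have to come from the same camera/pose so they line up:)

```python
from PIL import Image

# superimpose the dressed render onto the naked one and average them;
# Image.blend needs both images to be the same size and mode
dressed = Image.open("dressed.png").convert("RGB")
naked = Image.open("naked.png").convert("RGB")

candidate = Image.blend(naked, dressed, alpha=0.5)  # 50/50 average
candidate.save("bodypaint_candidate.png")
```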

So, here's the second attempt: rendering naked ladies in CUI and then manually painting them up, leaving huge gaps in coverage. Oddly satisfying; probably the same kinda high those guys who paint Space Marines catch.


 

picobyte

Active Member
Oct 20, 2017
639
687
predicament_01227_.png predicament_01223_.png predicament_01219_.png predicament_01209_.png predicament_01224_.png
As I also posted on Civitai, with other examples:
Using ComfyUI, workflow in the metadata. I believe this was the successful strategy to get the core string; it might also work in webui (untried). Hitting the core took me some time, I was naive, and maybe the strategy can be improved, but this was a success at first try (this time):

  1. Start with an SDXL checkpoint, possibly + LoRAs of your subject (untested); I included only the LCM one.
  2. Set the sampler to LCM.
  3. cfg 5.
  4. sgm_uniform scheduler.
  5. 512x512 latent.
  6. 7 steps.
  7. No negative prompt.
  8. Positive: start with your subject token up front. Without commas, add words, interleaving the core concept with the desired aspects of it, but really slowly, one or a few words at a time. Check the output images; if the output is not right, move words around, try synonyms, and if that doesn't work use other words entirely. Slowly build the prompt; there has to be a certain repetition where the core concept is elaborated, like you are expanding a string. At a certain point you'll find it doesn't really matter anymore. Then you have your core prompt.
  9. Use more text to complete the picture; keep it within 75 tokens for best results, and use MagicPrompt for variety in scenes. Commas count as tokens, so better to just not use them; use strong synonyms that have a double meaning, either alone or in context.
  10. Place the quality tags (masterwork, high quality, high resolution) at the end, along with restrictions; in the positive prompt this works better than in the negative.
  11. Then change to 1024x1024, cfg 2.3.
You get the best images if you keep your cfg low, though it can actually still produce good images with cfg 5, although a lot have peculiarities. Maybe it is possible to push the cfg up even a lot more at 512x512; then maybe you can set it higher at 1024x1024 and force the sampler to stay on subject while still producing good pictures, or lower the required steps, because that is probably what the cfg influences.
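For anyone outside ComfyUI, here is roughly how steps 1-7 and 11 translate into diffusers. Just a sketch: LCMScheduler stands in for the LCM sampler + sgm_uniform combo, and the model/LoRA names are my assumptions, not what's in the workflow metadata.

```python
import torch
from diffusers import StableDiffusionXLPipeline, LCMScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")  # the LCM lora (step 1)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)  # step 2

prompt = "subject token first, then the slowly built core string ..."  # steps 8-10

# prompt-building stage: 512x512, cfg 5, 7 steps, no negative (steps 3-7)
draft = pipe(prompt, height=512, width=512,
             guidance_scale=5.0, num_inference_steps=7).images[0]

# final stage: same prompt at 1024x1024 with cfg 2.3 (step 11)
final = pipe(prompt, height=1024, width=1024,
             guidance_scale=2.3, num_inference_steps=7).images[0]
final.save("final.png")
```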

My core concept was a bit perverted (of course ;-), but well, to each their own, I guess. The core concept was netorare, and I found this as a core string: naughty cuckolding smutty netorare hetero exhibitionist girlfriend obviously cheating sexual date experiment. That is probably not even the only one, but it's one that did work.

Enjoy!
 
Last edited:

SDAI-futa

Newbie
Dec 16, 2023
28
30
"Yo ho, a pirates wife for me..."
View attachment 3215564 View attachment 3215563

"...and a bottle of rum"

View attachment 3215565

Might have gotten the lyrics slightly wrong :p

I love the details in your work. You obviously pay a lot of attention to them... fingers, clothes and folds, the ships. There seems to be some stitching together of different parts of the image going on; are you using PS or another editor outside of SD? How do you use it, and at what part of your workflow? I'm enjoying reading your tutorials and what you do.
 

me3

Member
Dec 31, 2016
316
708
I love the details in your work. You obviously pay a lot of attention to them... fingers, clothes and folds, the ships. There seems to be some stitching together of different parts of the image going on; are you using PS or another editor outside of SD? How do you use it, and at what part of your workflow? I'm enjoying reading your tutorials and what you do.
It's all SD; I don't use outside editors for anything other than downscaling/cropping or "compressing" images.
In the case of the pirate images, you can see there are background elements that don't really line up, which is often a problem. If I wanted to fix it, I'd either use inpainting to replace/remove them, see if another seed had a better fit, or "cut" the important element and then layer it on a different background.
Mostly, though, since everyone here is well aware of the issues and faults in AI images, it's not always worth fixing all of those. A wallpaper that doesn't quite match up is less of a concern than 9 fingers on one hand; pick your battles. Also, it's "art", and doesn't art always have its faults and oddities :p
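If anyone wants to try the inpainting route on a background like that, it's basically this (a diffusers sketch; the model name and the mask are placeholders, not what I used):

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = load_image("pirate.png")          # the render with the mismatched background
mask = load_image("background_mask.png")  # white where the background should be redone

# only the masked region gets regenerated; the subject stays untouched
fixed = pipe("ship deck background, consistent lighting",
             image=image, mask_image=mask).images[0]
fixed.save("pirate_fixed.png")
```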
 
  • Like
Reactions: devilkkw and Mr-Fox

me3

Member
Dec 31, 2016
316
708
Just to give an example of how the workflow I posted here works and can be used.
Using the prompt from this post.
Starting with a less-than-good image, you can see how it gets "cleaned up" along the way while maintaining much of the initial look. So if you either have an image or generate a low-quality image, it can potentially be "fixed". Or you could generate on a specific model that has whatever design/pose you're after and then use other models to add the detailing and finish to it. There are obviously multiple ways of doing the same thing; this is just one.
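Outside Comfy, the same idea can be approximated as repeated img2img passes at decreasing strength (a sketch; the checkpoint name is just a placeholder):

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "the same prompt the draft was generated with"
image = load_image("rough_draft.png")

# each pass keeps more of the previous result (lower strength), so the
# composition survives while the detailing gets progressively cleaned up
for strength in (0.6, 0.45, 0.3):
    image = pipe(prompt, image=image, strength=strength).images[0]

image.save("cleaned_up.png")
```

Swapping in a different checkpoint between passes mirrors the "use other models to add the detailing and finish" part.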

_s1.jpg _s2.jpg _s3.jpg _s4.jpg
 
  • Like
Reactions: Mr-Fox

SDAI-futa

Newbie
Dec 16, 2023
28
30
It's going to be complicated to follow all the rules to post in this thread, but I'm going to try, and I'll keep editing to add things as I work back through my process.

The original image I had looks nothing like the end result, and that's the point. I have over 20 different key stages in between, and hundreds of images from batches of inpaints and such in between... so please be patient. I'll answer whatever I can and would love to learn more from everyone.

The end result as of today (I will probably keep working on it):

00030-20240102_142533_856x1136.png

This is based on an image of a woman standing (dressed, not trans) that I got from a DuckDuckGo image search. I don't have the original JPG; this is the resulting image from working with ControlNet and the Depth model.
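In diffusers terms, the depth step looks roughly like this (a sketch; the depth estimator and model names are my stand-ins, not necessarily what I ran):

```python
import numpy as np
import torch
from PIL import Image
from transformers import pipeline
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# estimate a depth map from the reference photo
depth = pipeline("depth-estimation")(load_image("reference_photo.jpg"))["depth"]
depth = np.array(depth)[:, :, None].repeat(3, axis=2)  # single channel -> RGB
depth = Image.fromarray(depth)

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# the depth map pins the pose/composition while the prompt restyles everything
out = pipe("a woman standing, new outfit and setting",
           image=depth, num_inference_steps=30).images[0]
out.save("from_depth.png")
```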

Here are some of the original details; again, keep in mind there were many transitions in between. I will keep updating this post:



More details:


00014-20231227_124823_768x1024.png

 
  • Like
Reactions: VanMortis

devilkkw

Member
Mar 17, 2021
292
982
Today I downloaded ComfyUI and started experimenting. After understanding how to load LoRAs and embeddings, I have to say I'm really impressed with how it works, especially the memory optimization: I can reach 4096 on my 6 GB card without any OOM error. I need to experiment more, but I like how it works.
Also, I want to say thank you to all the people who made help posts about it, really useful.
 

devilkkw

Member
Mar 17, 2021
292
982
OK guys, I'm really happy you made me move to Comfy. After some experimenting:
The real-size image was too big to post here; I reduced it to 50%.
And this is my question: how do I control the upscaler model? In A1111 I have an option to choose the resize, but I can't find any such option in Comfy.

Also, this is how it's made. I hope the node setup is correct; I'm just experimenting.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,526
3,596
OK guys, I'm really happy you made me move to Comfy. After some experimenting:
The real-size image was too big to post here; I reduced it to 50%.
And this is my question: how do I control the upscaler model? In A1111 I have an option to choose the resize, but I can't find any such option in Comfy.

Also, this is how it's made. I hope the node setup is correct; I'm just experimenting.
Any chance you can post the file with the actual workflow? It is much simpler for me to pop it into my CUI, make changes, and post it back.
 
  • Like
Reactions: devilkkw