[Stable Diffusion] Prompt Sharing and Learning Thread

picobyte (Active Member, Oct 20, 2017)
predicament_01227_.png predicament_01223_.png predicament_01219_.png predicament_01209_.png predicament_01224_.png
As I also posted on Civitai, with other examples:
Using ComfyUI, workflow in the metadata. I believe this was the strategy that got me to the core string. It might also work in the webui (untried). Hitting the core took me some time, I was naive, and the strategy can probably still be improved, but it was a success on the first try (this time):

  1. Start with an SDXL checkpoint, possibly plus LoRAs of your subject (untested); I included only the LCM one.
  2. Set the sampler to LCM.
  3. CFG 5.
  4. Scheduler: sgm_uniform.
  5. Latent size 512x512.
  6. 7 steps.
  7. No negative prompt.
  8. Positive prompt: start with your subject token up front. Without commas, add words one or a few at a time, interleaving the core concept with the aspects you want from it, but really slowly: check the output images, and if the output is not right, move words around, try synonyms, or use other words entirely. Slowly build the prompt; there has to be a certain repetition where the core concept is elaborated, as if you are expanding a string. At a certain point you'll find the exact wording doesn't really matter anymore. Then you have your core prompt.
  9. Use more text to complete the picture, but keep it within 75 tokens for best results; use MagicPrompt for variety in scenes. Commas count as tokens, so it's better to just leave them out; use strong synonyms that have a double meaning, either alone or in context.
  10. Place the quality terms (masterwork, quality, high resolution) at the end, along with restrictions; putting them in the positive prompt works better than in the negative.
  11. Then change to 1024x1024, CFG 2.3.
You get the best images if you keep your CFG low, but it can still produce good images at CFG 5, although a lot of them have peculiarities. Maybe it is possible to push the CFG up even more at 512x512; then maybe you can also set it higher at 1024x1024 and force the sampler to stay on subject while still producing good pictures, or lower the required steps, because that is probably what the CFG influences.
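If you'd rather poke at these settings outside of ComfyUI, here is a rough diffusers sketch of the same recipe. The checkpoint and LCM LoRA names are the standard Hugging Face ones, not necessarily the exact files used above, and sgm_uniform is a ComfyUI scheduler option with no one-to-one switch in diffusers:

```python
import torch
from diffusers import DiffusionPipeline, LCMScheduler

# Assumed model names; swap in whatever SDXL checkpoint / subject LoRAs you actually use.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")  # the LCM LoRA
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

core = "subject token plus the slowly built core words"  # grow this a few words at a time

# Exploration pass: small and fast, so you can iterate on the wording quickly.
pipe(prompt=core, width=512, height=512,
     num_inference_steps=7, guidance_scale=5.0).images[0].save("core_512.png")

# Once the core prompt holds up, switch to full size with the lower CFG.
pipe(prompt=core, width=1024, height=1024,
     num_inference_steps=7, guidance_scale=2.3).images[0].save("final_1024.png")
```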

My core concept was a bit perverted (of course ;-) ), but well... to each their own, I guess. The core concept was netorare, and I found this as a core string: "naughty cuckolding smutty netorare hetero exhibitionist girlfriend obviously cheating sexual date experiment". It's probably not even the only one, but it is one that did work.

Enjoy!
 

SDAI-futa (Newbie, Dec 16, 2023)
"Yo ho, a pirates wife for me..."
View attachment 3215564 View attachment 3215563

"...and a bottle of rum"

View attachment 3215565

Might have gotten the lyrics slightly wrong :p

(spoiler content)
I love the details in your work. You obviously pay a lot of attention to them... the fingers, clothes and folds, the ships. There seems to be some stitching of different parts of the image going on; are you using PS or another editor outside of SD? How do you use it, and at what part of your workflow? I'm enjoying reading your tutorials and what you do.
 

me3 (Member, Dec 31, 2016)
There seems to be some stitching of different parts of the image going on; are you using PS or another editor outside of SD? How do you use it, and at what part of your workflow?
It's all SD; I don't use outside editors for anything other than when I need to downscale, crop or "compress" images.
In the case of the pirate images you can see there are background elements that don't really line up, which is often a problem. If I wanted to fix it I'd either use inpainting to replace/remove them, see if another seed had a better fit, or "cut" out the important element and layer it on a different background.
Mostly, though, since everyone here is well aware of the issues and faults in AI images, it's not always worth fixing all of those. A wallpaper that doesn't quite match up is less of a concern than 9 fingers on one hand; pick your battles. Also, it's "art", and doesn't art always have its faults and oddities :p
 
  • Like
Reactions: devilkkw and Mr-Fox

me3 (Member, Dec 31, 2016)
Just to give an example of how the workflow I posted here works and can be used.
Using the prompt from this post.
Starting with a less-than-good image, you can see how it gets "cleaned up" along the way while maintaining much of the initial look. So if you either have an image or generate a low-quality one, it can potentially be "fixed". Or you could generate on a specific model that has whatever design/pose you're after and then use other models to add the detailing and finish. There are obviously multiple ways of doing the same thing; this is just one.
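The actual graph is the ComfyUI workflow linked above. Purely as an illustration of the idea (rough image in, successive img2img passes on one or more checkpoints at moderate denoise, detail added while the composition survives), here is a hedged diffusers sketch; the model names are placeholders, not me3's setup:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

prompt = "the same prompt used for the rough starting image"
image = Image.open("rough_start.png").convert("RGB")  # the "less than good" image

# Placeholder checkpoints: one that nails the pose/design, one that adds the finish.
stages = [("model-with-the-pose/checkpoint", 0.45),
          ("model-with-the-detail/checkpoint", 0.30)]

for ckpt, strength in stages:
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        ckpt, torch_dtype=torch.float16).to("cuda")
    # Lower strength keeps more of the incoming image; higher lets the model repaint more.
    image = pipe(prompt=prompt, image=image, strength=strength,
                 guidance_scale=7.0, num_inference_steps=30).images[0]

image.save("refined.png")
```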

_s1.jpg _s2.jpg _s3.jpg _s4.jpg
 
  • Like
Reactions: Mr-Fox

SDAI-futa (Newbie, Dec 16, 2023)
It's going to be complicated to follow all the rules for posting in this thread, but I'm going to try, and I'll keep editing to add things as I work back through my process.

The original image I had looks nothing like the end result, and that's the point. I have over 20 different key stages in between, and hundreds of images from batches of inpaints and such along the way... so please be patient; I'll answer whatever I can and would love to learn more from everyone.

The end result as of today (I will probably keep working on it):

00030-20240102_142533_856x1136.png

This is based on an image of a woman standing (dressed, not trans) that I got from a DuckDuckGo image search. I don't have the original JPG; this is the resulting image from working with ControlNet with the Depth model.
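For reference, the ControlNet-plus-Depth step can also be reproduced outside the UI. A minimal diffusers sketch of the idea (estimate a depth map from the reference photo, then let it steer the new render); the checkpoint and detector names are the common public ones, not necessarily what was used here:

```python
import torch
from PIL import Image
from controlnet_aux import MidasDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

reference = Image.open("reference_photo.jpg").convert("RGB")  # the image-search photo

# Estimate a depth map so only the pose/composition carries over, not the pixels.
midas = MidasDetector.from_pretrained("lllyasviel/Annotators")
depth = midas(reference)

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

out = pipe(prompt="your new subject, outfit and style here",
           image=depth, num_inference_steps=30, guidance_scale=7.0).images[0]
out.save("controlnet_depth_result.png")
```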

Here are some of the original details. Again, keep in mind there were many transitions in between; I will keep updating this post:

(spoiler content)

(spoiler content)

More details:
(spoiler content)


00014-20231227_124823_768x1024.png

 
  • Like
Reactions: VanMortis

devilkkw (Member, Mar 17, 2021)
Today I downloaded ComfyUI and started experimenting. After understanding how to load LoRAs and embeddings, I have to say I'm really impressed with how it works, especially the memory optimization: I reached 4096 on my 6 GB card without any OOM error. I need to experiment more, but I like how it works.
Also, I want to say thank you to all the people who made help posts about it; really useful.
 

devilkkw (Member, Mar 17, 2021)
OK guys, I'm really happy you made me move to Comfy. After some experimenting:
(spoiler content)
The full-size image was too big to post here, so I reduced it to 50%.
And this is my question: how do I control the upscaler model? In A1111 I have an option to choose the resize, but I can't find any such option in Comfy.

Also, this is how it's made; hope the node setup is correct, I'm just experimenting.
(spoiler content)
 

Sepheyer (Well-Known Member, Dec 21, 2020)
Also, this is how it's made; hope the node setup is correct, I'm just experimenting.
Any chance you can post the file with the actual workflow? It's much simpler for me to pop it into my CUI, make changes, and post it back.
 
  • Like
Reactions: devilkkw

theMickey_ (Engaged Member, Mar 19, 2020)
And this is my question: how do I control the upscaler model? In A1111 I have an option to choose the resize, but I can't find any such option in Comfy.
There are Custom Nodes that can either rescale or resize your image, and you can also enter any factor by which you want to rescale/resize. Here's just one example:

1704294318040.png
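The two modes the node exposes (rescale by a factor vs. resize to fixed dimensions) are easy to picture with a tiny Pillow sketch; this is only the resize arithmetic, not the upscale-model part, and the file names are just examples:

```python
from PIL import Image

img = Image.open("upscaled.png")  # any image you want to bring back down or up

# Rescale: multiply both sides by a factor (0.5 = the "reduced it to 50%" case).
factor = 0.5
rescaled = img.resize((int(img.width * factor), int(img.height * factor)), Image.LANCZOS)
rescaled.save("rescaled_50pct.png")

# Resize: force exact output dimensions regardless of the input size.
resized = img.resize((896, 1152), Image.LANCZOS)
resized.save("resized_896x1152.png")
```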
 

devilkkw (Member, Mar 17, 2021)
There are Custom Nodes that can either rescale or resize your image, and you can also enter any factor by which you want to rescale/resize. Here's just one example:

View attachment 3229188
Many thanks, I added it. I'm impressed by the number of extensions CUI has o_O!
Any chance you can post the file with the actual workflow? It's much simpler for me to pop it into my CUI, make changes, and post it back.
Yes, I just changed the upscaler node to the one suggested by theMickey_.

I zipped it because JSON uploads aren't allowed.
 

Sepheyer (Well-Known Member, Dec 21, 2020)
Many thanks, I added it. [...] I zipped it because JSON uploads aren't allowed.
Two items, maybe you know them already:

1) To export the workflow as a screenshot: right-click on the workspace > "Workflow Image" > "Export" > "PNG".

2) There is an extension that manages all the nodes for you, allowing you to install missing nodes with just a few clicks. You might already have it; if not, you absolutely have to have it:
 
  • Red Heart
Reactions: devilkkw

Sepheyer (Well-Known Member, Dec 21, 2020)
And this is my question: how do I control the upscaler model? In A1111 I have an option to choose the resize, but I can't find any such option in Comfy.
So, another way to upscale is via latent upscale rather than using a pixel upscaler.

Namely, you run a sampler off an empty latent, then upscale the latent the sampler produces, and then run that latent again with the prompt and the model. I attached the workflow; you might know this though.

Why would you do this rather than using a pixel upscale? The main reason is to fill in additional details using the actual model. Another reason is that some LoRAs can burn the first output rather badly and you want to re-refine the latent using the original model. Thus, you run the first sampler with the LoRA and then re-run the same latent, at a small denoise, using the pure checkpoint. Or vice versa: you first run the sampler with the pure checkpoint, and only once upscaled do you throw in the LoRA for refinement.

I attached your workflow with two additional samplers added. Naturally, you want preview nodes thrown in to see what each sampler does, because that way you can safely reduce the number of steps. There are two latent upscalers, the regular one and NNLatent; each has its own quirks. 80% of the time you want NNLatent, but sometimes the original latent upscaler is the cat's meow. There are three samplers: the first one uses the original checkpoint, the next two use the LoRA'd models.

workflow (4).png
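For anyone not on Comfy, a rough diffusers translation of the same idea (sample an empty latent, upscale the latent itself, then resample at a small denoise). NNLatentUpscale is a Comfy custom node, so plain latent interpolation stands in for it here, and the checkpoint name is just a placeholder:

```python
import torch
import torch.nn.functional as F
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

prompt = "your prompt"
base = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

# Pass 1: sample from an empty latent at the small size, keep the result as a latent.
latents = base(prompt, width=512, height=512,
               num_inference_steps=25, output_type="latent").images

# Upscale the latent itself (2x here) instead of decoding to pixels and upscaling those.
latents = F.interpolate(latents, scale_factor=2, mode="bilinear")

# Pass 2: re-run the upscaled latent through the model at a small denoise so it fills
# in detail. This could be the same checkpoint, or the with/without-LoRA variants
# described above.
img2img = StableDiffusionImg2ImgPipeline(**base.components)
image = img2img(prompt, image=latents, strength=0.45,
                num_inference_steps=25).images[0]
image.save("latent_upscaled.png")
```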
 

Mr-Fox (Well-Known Member, Jan 24, 2020)
A new way to fix hands with controlnet.



ComfyUI Workflow:

ControlNet Aux:

Hand Inpaint Model:


*I have not tried it yet.
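The links above are where the actual method lives; since I have not tried it, here is only a generic, hedged sketch of the underlying idea in diffusers (mask the bad hand, then inpaint it with an inpaint ControlNet). The model names are the common public SD1.5 ones, and the file names are hypothetical; this is not necessarily what the linked workflow does:

```python
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline

image = Image.open("render.png").convert("RGB")   # the image with the bad hand
mask = Image.open("hand_mask.png").convert("L")   # white over the hand to redo

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

def make_inpaint_condition(img, msk):
    # The inpaint ControlNet wants the image with the masked region set to -1,
    # so it knows which pixels to regenerate.
    arr = np.array(img).astype(np.float32) / 255.0
    m = np.array(msk).astype(np.float32) / 255.0
    arr[m > 0.5] = -1.0
    return torch.from_numpy(arr).permute(2, 0, 1).unsqueeze(0)

fixed = pipe(prompt="detailed hand, five fingers",
             image=image, mask_image=mask,
             control_image=make_inpaint_condition(image, mask),
             num_inference_steps=30, strength=1.0).images[0]
fixed.save("render_fixed_hand.png")
```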
 

devilkkw (Member, Mar 17, 2021)
So, another way to upscale is via latent upscale rather than using a pixel upscaler. [...]
Yes, I was experimenting and just doing some tests; I added a latent upscaler and a second pass. It seems good for reducing time, but from what I see it's really difficult to obtain the same results as A1111.
With the prompt I've attached, the results in CUI are the same as yours; in A1111 it's closer to the prompt, because the woman has a really robotic suit in every render, while in CUI I get the robotic thing maybe once every 50 tests :(
My inexperience, of course; experimenting is the key to finding what works better.
Thank you for the suggestion, I will test it soon.
Another really noob question: how do I load a workflow from a PNG?
 

Sepheyer (Well-Known Member, Dec 21, 2020)
Another really noob question: how do I load a workflow from a PNG?
Click on "Load" in the sidemenu, find the "png" file and open it.
 
  • Like
Reactions: devilkkw

devilkkw (Member, Mar 17, 2021)
Thanks. I don't have the NN upscaler; is it an extension?
Also tried SVD... wow, impressive :eek:
It takes a lot of time, but for now I'll focus on getting better images, then try video.

Edit.
Sepheyer I tested your workflow and wow. I changed the latent upscale and the original image size to my default (896x1152); this is the result, with the workflow included.
(spoiler content)
I'll keep trying. I really love how Comfy works.
 
  • Red Heart
Reactions: Sepheyer

Delambo (Newbie, Jan 10, 2018)
I'm just getting started on my first VN, learning Ren'Py, and most likely AI art too, as I'm shit at any art myself. It seems to me that A1111 is not nice to a new user but probably has more power and flexibility, while CUI is easier to work with? Before I put any more time into A1111, is there a good reason to consider CUI for my assets?
 

devilkkw (Member, Mar 17, 2021)
Before I put any more time into A1111, is there a good reason to consider CUI for my assets?
I started using CUI today. I've always used A1111 and the switch is not that quick, but after some tries and understanding how it works, you start liking it as much as A1111.
Also check the guide on the 1st page; some of the posts here are really helpful.
If you're just starting a new visual novel, I suggest trying CUI and evaluating which works better for your results.
Btw, getting skilled takes time and practice. I've used A1111 from the beginning, and the difference when trying to make the same result in CUI is obvious; just keep trying.
If your game isn't a commission with a deadline to respect, just take your time and try it.
 
  • Like
Reactions: Mr-Fox and Sepheyer