[Stable Diffusion] Prompt Sharing and Learning Thread

Jimwalrus

Well-Known Member
Sep 15, 2021
1,047
4,002
I'm just trying to get to grips with one UI as it is... :cautious:

I have to go play with my fount of new-found knowledge before my dreams become spaghetti and string :oops:
In all seriousness, I'm sure even Sepheyer would recommend getting to grips with SD in A1111 first, then transitioning that knowledge to CUI.

Just as you'd learn to fly in a Cessna, then progress to a Gulfstream.
The power and control of CUI look so tempting, but the idea of facing a second learning curve when all I want is to produce hotties who look like celebs (or just hot) puts me off.
 

Lun@

Member
Dec 27, 2023
249
1,501
In all seriousness, I'm sure even Sepheyer would recommend getting to grips with SD in A1111 first, then transitioning that knowledge to CUI.

Just as you'd learn to fly in a Cessna, then progress to a Gulfstream.
The power and control of CUI look so tempting, but the idea of facing a second learning curve when all I want is to produce hotties who look like celebs (or just hot) puts me off.
I feel I have so much to learn about SD at the moment that anything else will be on the backburner for now.
I've literally spent only a day using SD at this point and it was mostly experimenting :)
 

namhoang909

Newbie
Apr 22, 2017
87
48
"Why not try ComfyUI?"
"ComfyUI can fix that"
"Go on, try ComfyUI..."

Don't switch! You'll go mad and all your dreams will be of spaghetti and string! ;)
:censored: It is quite unfortunate that I have not successfully generated any acceptable image in CUI (I even tried the Efficiency pack, which gives similar prompt weighting to A1111), so while I am interested in it, upscaling and experimenting are the only things I have done so far.
 

me3

Member
Dec 31, 2016
316
708
In all seriousness, I'm sure even Sepheyer would recommend getting to grips with SD in A1111 first, then transitioning that knowledge to CUI.

Just as you'd learn to fly in a Cessna, then progress to a Gulfstream.
The power and control of CUI look so tempting, but the idea of facing a second learning curve when all I want is to produce hotties who look like celebs (or just hot) puts me off.
For me there are two main reasons why I use Comfy much more than A1111. The primary one is the difference in memory usage: I haven't checked in 1.7, but when I can't even fully load an XL model, or use ControlNet (and other such addons that affect memory usage) without limiting image size for SD1.5, it gets easier to "use something else".
The second reason is that I generally like to "put stuff together" and experiment to see how things fit and work, which in Comfy is much more like moving puzzle pieces around than having to write code, as in A1111.

A1111 is a perfectly fine tool for the job, with its faults, same as everything else, and I'd happily use it more if I could. If you don't have the same VRAM concerns as me (or worse), the choice is probably a bit less straightforward and it's more "I like this one and it does what I need". Free will and options are good :)
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
A few examples of creating variations of essentially the same image in txt2img with OpenPose. Img2img can be very useful, but for this scenario txt2img is much more capable imo. I leave most settings and the prompt (composition, character and pose) the same and only change the context, weather, scenery and outfit. I also switch the checkpoint model and the upscaler for hiresfix.

I "borrowed" the trenchcoat lady by the eminent Thalies. But since the prompt wasn't included I whipped up a prompt myself.

Trenchcoat lady.png Trenchcoat Lady Pose.png

The prompt and data are included in each PNG file. Just load it in PNG Info, then send it to txt2img etc.

00004-2883932775.png 00006-2883932775.png 00023-162400741.png 00033-1455400359.png
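If you'd rather pull the embedded parameters out with a script instead of the PNG Info tab, here is a minimal Python sketch; it assumes A1111's convention of storing the settings in a PNG text chunk named "parameters" (the filename is just an example):
Code:
from PIL import Image  # pip install pillow

img = Image.open("00004-2883932775.png")
# A1111 writes the generation settings into a PNG text chunk named "parameters"
params = img.info.get("parameters")
print(params if params else "No embedded generation data found.")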

A few tips:

- Keep the prompt simple and don't use the shotgun approach of adding a ton of tags and phrases without reason.
Don't add too many LoRAs and/or TIs. Be methodical and only add one element at a time, otherwise you don't know what affects what.

- Don't copy the prompt practices you find on civitai etc.
People there don't know what they are doing most of the time, and you see a lot of shotgunning and throwing everything against the wall to see what sticks.
The images there are of course hand-picked and not at all representative of their workflow or process.

- Don't keep banging your head against the wall.
I very rarely do batches. If you don't get the result you want within a few tries with prompt adjustments etc, switch checkpoint models. Don't try to force a model to do something it has not been trained to do.
The same goes for issues with the eyes etc. Either try a different ckpt or simply fix it afterwards with inpainting etc.
You could of course use an extension like ADetailer (After Detailer) to give SD a helping hand.
 

Thalies

New Member
Sep 24, 2017
13
50
A few examples of creating variations of essentially the same image in txt2img with OpenPose. Img2img can be very useful, but for this scenario txt2img is much more capable imo. I leave most settings and the prompt (composition, character and pose) the same and only change the context, weather, scenery and outfit. I also switch the checkpoint model and the upscaler for hiresfix.

I "borrowed" the trenchcoat lady by the eminent Thalies. But since the prompt wasn't included I whipped up a prompt myself.

View attachment 3294318 View attachment 3294405

The prompt and data are included in each PNG file. Just load it in PNG Info, then send it to txt2img etc.

View attachment 3294344 View attachment 3294343 View attachment 3294345 View attachment 3294352

A few tips:

- Keep the prompt simple and don't use the shotgun approach of adding a ton of tags and phrases without reason.
Don't add too many LoRAs and/or TIs. Be methodical and only add one element at a time, otherwise you don't know what affects what.

- Don't copy the prompt practices you find on civitai etc.
People there don't know what they are doing most of the time, and you see a lot of shotgunning and throwing everything against the wall to see what sticks.
The images there are of course hand-picked and not at all representative of their workflow or process.

- Don't keep banging your head against the wall.
I very rarely do batches. If you don't get the result you want within a few tries with prompt adjustments etc, switch checkpoint models. Don't try to force a model to do something it has not been trained to do.
The same goes for issues with the eyes etc. Either try a different ckpt or simply fix it afterwards with inpainting etc.
You could of course use an extension like ADetailer (After Detailer) to give SD a helping hand.
Indeed, I might have gone a bit overboard with the 'shotgun method' of adding tons of tags and phrases to my prompts. The reason? I let GPT-4 create the prompts for me! :ROFLMAO:
 
  • Like
Reactions: Mr-Fox and Sepheyer

Sepheyer

Well-Known Member
Dec 21, 2020
1,571
3,768
Indeed, I might have gone a bit overboard with the 'shotgun method' of adding tons of tags and phrases to my prompts. The reason? I let GPT-4 create the prompts for me! :ROFLMAO:
- ChatGPT bro, create a prompt for the Little Red Riding Hood porn film actress costume.
- It is important to respect women's feelings and concerns, thus the director and the actress should work together to create an outcome acceptable to all parties.
- ...
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
I'm moving a convo with Thalies to this thread since I think it could help others also.

" Hi Mr. Fox,
As someone currently exploring the capabilities of Stable Diffusion through Fooocus, I'm reaching a point where I'm considering whether to continue with this tool or change to A1111.
I’m also curious about any addons you’ve found particularly beneficial.
Thank you. "

I have no experience with Fooocus or other simplified UIs, so I can't help you with those.
I'll give you my opinion though. They are likely based on A1111 (my speculation) and are only a "dumbed-down" version, so you might as well use the real thing.
Even if this is not the case, A1111 is mainstream at this point and you can find much more information and many more guides about it, as well as extensions etc. It's really not that complicated.
Just start out simple with the basics and go on from there.
Install A1111 first and start familiarizing yourself with it. Then the first two extensions you should get, imo, are ControlNet and ADetailer (After Detailer).
They might be included by default at this point though.
Get the SD Upscale script for img2img (not Ultimate Upscale) if you wish to upscale that way rather than using hiresfix for some reason.
If you are using a refiner, hiresfix doesn't work well, so in that case I upscale in img2img instead.
If you want to swap in your own face, or celebrities' etc, get ReActor.

I also recommend getting the NMKD Siax and UltraSharp upscaler models.



You can install A1111 two ways: either get the installer .exe or, if you are familiar with GitHub, do a git clone.
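For the git route, the usual steps look roughly like this (a sketch assuming you already have Python and git installed; on Linux/macOS you'd run ./webui.sh instead of the .bat file):
Code:
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
rem First launch creates a venv and downloads dependencies, so it takes a while
webui-user.bat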

A1111 install guide Sebastian Kamph:
Using A1111 Sebastian Kamph:
 
Last edited:
  • Heart
Reactions: Sepheyer

shakus-gravus

Member
May 24, 2020
111
172
Appreciate all the info shared in this thread. I am only just starting to mess with AI image generation. Is anyone aware of a good series of videos that unpack how to get A1111 all set up without focusing on every little nerd-knob and feature? Basically, I'm looking for an A1111 deployment guide along the lines of a quick-start guide that gets you up and running with everything you need as fast as possible, so that you can spend more time creating than configuring the tool you're using to create ;-)
 
  • Like
Reactions: Sepheyer

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
I took a quick but gentle stab at Lun@'s lovely succubus. First and foremost because I really liked the image and was curious; secondly, to show an example of how to structure the prompt.

Example from PromptGeek's awesome ebook:

" [STYLE OF PHOTO] photo of a [SUBJECT],
[IMPORTANT FEATURE], [MORE DETAILS], [POSE OR ACTION],
[FRAMING], [SETTING/BACKGROUND], [LIGHTING],
[CAMERA ANGLE], [CAMERA PROPERTIES],
in style of [PHOTOGRAPHER] "
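To make the structure concrete, here is a hypothetical prompt filled in slot by slot (my own illustration, not an example taken from the book):
Code:
RAW analog photo of a young woman in a trenchcoat,
freckled face, soft smile, walking towards the camera,
full body shot, rainy city street at night, neon reflections on wet pavement,
low camera angle, 85mm lens, f/1.8, shallow depth of field,
in style of Annie Leibovitz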


I use a similar structure but not exactly the same.

If you wish, you can compare the two prompts. Not to point fingers at Lun@, but for learning.
I grabbed the essentials but simplified it and corrected typos etc. You could refine and expand it more, but it's not my project so I'll leave that to Lun@.

This was the first and only image I generated.
00038-4232349330.png

I made a post recently about PromptGeek's book:
https://f95zone.to/threads/stable-diffusion-prompt-sharing-and-learning-thread.146036/post-12775145
 
Last edited:

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Appreciate all the info shared in this thread. I am only just starting to mess with AI image generation. Is anyone aware of a good series of videos that unpack how to get A1111 all set up without focusing on every little nerd-knob and feature? Basically, I'm looking for an A1111 deployment guide along the lines of a quick-start guide that gets you up and running with everything you need as fast as possible, so that you can spend more time creating than configuring the tool you're using to create ;-)
https://f95zone.to/threads/stable-diffusion-prompt-sharing-and-learning-thread.146036/post-12790594

or just scroll up.. :p;)
 
  • Like
Reactions: Sepheyer

devilkkw

Member
Mar 17, 2021
324
1,094
I'll throw out that for a photorealistic but blurry, almost green-screened effect, the LCM sampler can produce high-quality images. LCM can also make a smooth, almost 3D-looking image when doing image-to-image without negative prompts; just lower the CFG scale.

The DDPM sampler is my go-to now for photoreal, even over the new heunpp2.

I can't stress enough the importance of adjusting the CFG scale, as some samplers will be horrible at the default of 8.

also...



View attachment 3293599
Really, DDPM for realism? In my tests it washes out the skin and gives an almost 3D-render-like (like DAZ3D) image.
The best for me, in CUI, is UniPC with this trick:
make a half-size image with uni_pc_bh2, 5-6 steps, normal scheduler, CFG 6 to 8 (depending on the prompt).
Upscale the latent 2x (to get the final size you want), then pass the latent to another KSampler, using CFG 5 to 7, exponential scheduler, 20-25 steps.
With this you get a good level of detail and good realism.
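For anyone who wants to try the same two-pass idea outside Comfy, here is a rough Python sketch using the diffusers library. It is not devilkkw's exact node graph: his 2x upscale happens in latent space, which this sketch only approximates by resizing the decoded image, and the model name and prompt are placeholders.
Code:
import torch
from diffusers import (StableDiffusionPipeline,
                       StableDiffusionImg2ImgPipeline,
                       UniPCMultistepScheduler)

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

prompt = "photo of a woman in a fantasy forest"  # placeholder prompt

# Pass 1: half-size base image, few steps, CFG in the 6-8 range
base = pipe(prompt, width=384, height=512,
            num_inference_steps=6, guidance_scale=7.0).images[0]

# 2x upscale, then a second pass at lower CFG with more steps
big = base.resize((768, 1024))
img2img = StableDiffusionImg2ImgPipeline(**pipe.components)
out = img2img(prompt, image=big, strength=0.55,
              num_inference_steps=25, guidance_scale=6.0).images[0]
out.save("two_pass.png")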
For example fantasy:
kkw_original_00145_.png-w.jpg
And real:
kkw_original_00083_.png-w.jpg

I used my embedding for photorealism and skin detail, with the trick I suggested.
Also, I don't know if it's different with other checkpoints; I only have my own self-made checkpoint and don't use any others, so I'd need to test whether the samplers behave differently on other checkpoints. I've never tested it that way.
Can you run some tests this way?
BTW, thanks for the LoRA, did you make a 1.5 version?
 

me3

Member
Dec 31, 2016
316
708
I took a quick but gentle stab at Lun@'s lovely succubus. First and foremost because I really liked the image and was curious; secondly, to show an example of how to structure the prompt.

Example from PromptGeek's awesome ebook:

" [STYLE OF PHOTO] photo of a [SUBJECT],
[IMPORTANT FEATURE], [MORE DETAILS], [POSE OR ACTION],
[FRAMING], [SETTING/BACKGROUND], [LIGHTING],
[CAMERA ANGLE], [CAMERA PROPERTIES],
in style of [PHOTOGRAPHER] "


I use a similar structure but not exactly the same.

If you wish, you can compare the two prompts. Not to point fingers at Lun@, but for learning.
I grabbed the essentials but simplified it and corrected typos etc. You could refine and expand it more, but it's not my project so I'll leave that to Lun@.

This was the first and only image I generated.
View attachment 3294744

I made a post recently about PromptGeek's book:
https://f95zone.to/threads/stable-diffusion-prompt-sharing-and-learning-thread.146036/post-12775145
Borrowing most of Lun@'s prompt, and since a large number of comic artists were mentioned, it seemed a bit wrong not to have some more comic-like images.

PB-_temp_qvukp_00001_.png PB-_temp_ilhly_00001_.png


And something else...
PB-_temp_orgxp_00001_.png PB-_temp_xppld_00001_.png

It's all the same prompt, just different models.
[prompt in spoiler]
 

Microtom

Well-Known Member
Sep 5, 2017
1,153
4,250
Ok so I made a post previously about guided training to ease the formation of the neural network. I made a first run.

So the AI associates words with image components. But it has to identify them somehow. Something like a pussy has multiple parts that aren't super obvious, and the AI might never be able to distinguish them. That's why pussies, hands and other complex parts are hard for it.

So, the solution is to guide it to make the associations. You create an image that contains two identical images side by side. On one side you color the region of the concept you want to teach the AI. It makes the association quickly this way.

Here are examples of such images.

[example images in spoiler]
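A minimal sketch of how such a training pair could be composed with PIL (the filenames are hypothetical; it assumes you already have the photo and a color-coded region mask that is transparent everywhere else):
Code:
from PIL import Image  # pip install pillow

photo = Image.open("photo.png").convert("RGB")
mask = Image.open("regions_mask.png").convert("RGBA")  # colored regions, transparent elsewhere

# Right half: the same photo with the colored regions blended on top
colored = photo.copy()
colored.paste(mask, (0, 0), mask)  # the mask's alpha channel limits the paste

# Side-by-side pair: untouched photo on the left, color-annotated copy on the right
pair = Image.new("RGB", (photo.width * 2, photo.height))
pair.paste(photo, (0, 0))
pair.paste(colored, (photo.width, 0))
pair.save("training_pair.png")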


For this first attempt, the dataset had about 140 images. 35 were pussy close-ups, either from the front or from behind.

The training prompts look like this:
Code:
Color-associated regions in two identical side-by-side photographs of the front side view of a spread pussy.
Shaven pussy.
The magenta region is the labia majora.
The green region is the pussy lips or labia minora.
The red region is the clitoris hood. The blue region is the clitoris.
The cyan region is the closed anus or closed asshole.
The yellow region is the slightly opened vaginal opening.
So each region is given a description to associate with.

The dataset didn't have duplicates. If I remember correctly, I did 10 epochs of 6 repeats at batch size 4.

So, with the LoRA it gave me, I can ask it to just generate a pussy. Example:
[image in spoiler]

I can also ask it to generate a pussy and identify a region by giving it a color. A failed attempt:

[image in spoiler]

Again, but a successful attempt:

[image in spoiler]

I can also ask it to show multiple regions, like in the training photos.
[image in spoiler]

The red color was used to identify the clitoris hood, but it may have been mistaken for the vaginal walls that are seen in some photos.

The magenta might also be problematic. I might have to pick more distinctive colors. But there are only so many colors, and I'm not sure if I can use the same one for different concepts.

So, for a training set of just 35 pussy close-ups, that's pretty fucking good imo. This same method would work well for fingers too.
 
Last edited:

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,145
1,957
Hey guys! After A1111 crashed my computer back then, I'd like to give the whole thing another shot, but with ComfyUI. Can someone point me to a good installation / setup guide, or should I just go to YouTube and roll with whatever is highly rated?
 

felldude

Active Member
Aug 26, 2017
572
1,695
BTW, thanks for the LoRA, did you make a 1.5 version?
Not with that dataset, but I have an old 1.5 LoRA that is on my Civitai page.

Also, I've noticed that some LoRAs, if the CLIP is highly trained, will require a CFG scale in the 20s to generate good results in image-to-image.

All of these examples are image-to-image at around 0.5 denoise (PNGs, so the data is in the file).
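If anyone wants to reproduce this kind of CFG sweep outside ComfyUI, here is a hedged diffusers sketch; the model, LoRA filename, source image and prompt are all placeholders, and it assumes a recent diffusers version where img2img pipelines can load safetensors LoRAs:
Code:
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipe.load_lora_weights(".", weight_name="my_lora.safetensors")  # hypothetical LoRA file

src = Image.open("source.png").convert("RGB")
# Heavily trained LoRAs may only look right at the high end of this range
for cfg in (7, 14, 21, 28):
    out = pipe("portrait photo", image=src, strength=0.5,
               guidance_scale=cfg).images[0]
    out.save(f"img2img_cfg_{cfg}.png")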

LCM: soft background and soft features.
ComfyUI_00533_.png


DDPM, CFG 14, no negatives.
ComfyUI_00541_.png


And the final one is an example of not having "AI girl" show up due to strong negatives, using a LoRA at -1.0 and 0.5.
ComfyUI_00540_.png

The final one is natively upscaled to 1280x1536.
(I'd normally fix the color and the eye-size asymmetry... and probably miss something big like 6 fingers :D)

ComfyUI_00542_.png
 
Last edited:

namhoang909

Newbie
Apr 22, 2017
87
48
Housekeeping regarding settings:
  1. Go to tab "Parameters", set "max_new_tokens" to the very max. As of today, a fresh install comes with a max of 2000.
  2. Go to tab "Chat settings", then subtab "Instruction Template", choose "guanaco non-chat".
  3. Go to tab "Text generation", input field, and then: "Write a script for the red riding hood porn film." and watch the magic happen.
I followed the instructions to the 3rd step; it said there was no model, so I chose the downloaded model in the Model tab, and then it threw this error: 'OSError: It looks like the config file at 'models\guanaco-7B.ggmlv3.q4_1.bin' is not a valid JSON file.' Did I miss something?
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,571
3,768
I followed the instructions to the 3rd step; it said there was no model, so I chose the downloaded model in the Model tab, and then it threw this error: 'OSError: It looks like the config file at 'models\guanaco-7B.ggmlv3.q4_1.bin' is not a valid JSON file.' Did I miss something?
So, this language model setup has changed since then, and regretfully it is no longer valid. The language models are downloaded and installed somewhat differently now.

In the "model" tab, go to field "Download model or LoRA" on the right and paste there the link to a model: TheBloke/Xwin-MLewd-7B-V0.2-AWQ

Then click download.

Now, what happened between June, when I posted that message, and now is that the UI changed and a new set of models, the AWQ ones, got introduced. So now one downloads models via the UI, because there is actually a bunch of files to download.

Then, once the download is finished and you choose the model, make sure the model type is set to "AWQ". Good luck.
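If the built-in downloader ever misbehaves, the same multi-file fetch can be done from Python with huggingface_hub; the target folder layout is my assumption (text-generation-webui expects each model in its own subfolder under models/):
Code:
from huggingface_hub import snapshot_download  # pip install huggingface_hub

# AWQ models are a whole folder of files, not a single .bin,
# so grab the entire repo in one go
snapshot_download(
    repo_id="TheBloke/Xwin-MLewd-7B-V0.2-AWQ",
    local_dir="models/TheBloke_Xwin-MLewd-7B-V0.2-AWQ",
)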
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Hey guys! After A1111 crashed my computer back then, I'd like to give the whole thing another shot, but with ComfyUI. Can someone point me to a good installation / setup guide, or should I just go to YouTube and roll with whatever is highly rated?
Learn ComfyUI, part one of a series by Olivio Sarikas:

Sepheyer and me3, and a couple of others, are our local ComfyUI experts. I'm sure they will help you too.

Who knows, Jim and I might be persuaded to give it a go too. I know that I, for one, am intrigued and curious about it, but out of boneheadedness I haven't taken the plunge yet. ;)
 
Last edited: