[Stable Diffusion] Prompt Sharing and Learning Thread

asdfgggggggg

New Member
Nov 3, 2018
1
0
Hey guys, I'm new to SD. I've been following the guide in this thread (which is awesome, by the way), and I was wondering if I could ask for some help: I generated a character I really like, and I want to be able to consistently generate that character and that art style. I tried using the same seed and prompt, but I noticed that the more prompts I added, the more the art style and character changed. Do you guys have any tips for this? What is this art style called?
00085-3342861888.png
 

CBTWizard

Newbie
Nov 11, 2019
18
18
I tried using the same seed and prompt, but I noticed that the more prompts I added, the more the art style and character changed. Do you guys have any tips for this?
As far as I know, your options are Img2Img, ControlNet (unreliable/incompatible with Pony), variation seeds, or making several images similar to this one and training a LoRA on them. The LoRA route can take a good chunk of your time depending on the method used, but it has the most consistency.
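A note on variation seeds: the UI keeps the base seed's noise and spherically interpolates it toward a second seed's noise by a chosen strength, so small strengths give near-identical images. A toy, self-contained sketch of that interpolation (2-element lists stand in for the real latent tensors, which the actual UIs handle on the GPU):

```python
import math

def slerp(v0, v1, t):
    """Spherical interpolation between two noise vectors.

    Conceptually what a "variation seed" does: blend the noise from
    the base seed (v0) with the noise from the variation seed (v1)
    by strength t in [0, 1].
    """
    dot = sum(a * b for a, b in zip(v0, v1))
    norm0 = math.sqrt(sum(a * a for a in v0))
    norm1 = math.sqrt(sum(b * b for b in v1))
    cos_theta = max(-1.0, min(1.0, dot / (norm0 * norm1)))
    theta = math.acos(cos_theta)
    if theta < 1e-6:  # vectors nearly parallel: fall back to plain lerp
        return [a + t * (b - a) for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

base = [1.0, 0.0]       # noise from the base seed (toy 2-D stand-in)
variation = [0.0, 1.0]  # noise from the variation seed
nudged = slerp(base, variation, 0.2)  # strength 0.2: mostly the base image
```

At strength 0 you get the base image back exactly, which is why cranking variation strength down is the usual way to get "the same character, slightly different".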
 
Last edited:

Frlewdboi

New Member
Jun 23, 2022
8
4
Hi everyone. I have been seriously learning for a week now; I've installed Forge and ComfyUI and have used multiple models, checkpoints, LoRAs and whatnot. I have mostly done txt2img and dabbled in inpainting, but not that much. I understand the concepts of generative AI and am starting to tame prompts, basically.

For now, I have been getting pretty good results with Pony and its derivatives. I hit a couple of inherent AI limitations, like describing two or more DIFFERENT characters (pretty hard to avoid when you want to generate ongoing sex).

I managed to get two. Now the tricky part: three and more. And then I discovered Forge Couple, which literally lets the user define sub-zones / sub-prompts. Like, describe each character on one prompt line and the general settings on another, and then have zones cover each other. They don't need to be distinct.

Hurray!

Except ... of course it does not work with pony.

Do you know a way I can make that happen? I'd rather keep Pony, but I can switch to a model that gives good NSFW / anime style too.


Another issue I am facing is trying to place characters on a large background. Pony and most character-oriented models will render a character and make it fill the max resolution, which is a pain. I saw a pic on Civitai that didn't do that, though, but the person who made it was very vague in his answer. He mentioned he achieved that with ADetailer and upscaling?

If anyone can explain this, it would be very helpful. Thanks!
 

CBTWizard

Newbie
Nov 11, 2019
18
18
...except... of course it does not work with Pony.

Do you know a way I can make that happen? I'd rather keep Pony, but I can switch to a model that gives good NSFW / anime style too.
I'm using reForge, so I don't know if this applies to you, but Forge Couple works for me. As the GitHub page suggests, use 1boy/1girl tags on every line to help distinguish the characters; if you have, say, two characters of the same gender, apply 2boys/2girls to both of them (and to the global line, if you have that enabled), and add emphasis with parentheses, i.e. (2girls), ((2girls)) or (2girls:1.3). You can download the picture in the spoiler below and open it up in Stable Diffusion for an example.
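For anyone new to the parenthesis syntax above: in A1111-style UIs, each pair of plain parentheses multiplies a tag's attention weight by 1.1, while `(tag:1.3)` sets the weight explicitly. A simplified sketch of how those weights are read (the real parser also handles nesting of mixed forms, escaped parentheses, and square brackets for de-emphasis, which this toy version ignores):

```python
import re

def emphasis_weight(tag: str):
    """Return (bare_tag, weight) for A1111-style emphasis syntax.

    '(tag)'      -> weight multiplied by 1.1 per paren layer
    '(tag:1.3)'  -> explicit weight
    'tag'        -> weight 1.0
    """
    # Explicit-weight form: parens around 'tag:number'
    m = re.fullmatch(r"\(+([^():]+):([\d.]+)\)+", tag)
    if m:
        return m.group(1), float(m.group(2))
    # Stacked-parens form: each layer multiplies the weight by 1.1
    weight = 1.0
    while tag.startswith("(") and tag.endswith(")"):
        tag = tag[1:-1]
        weight *= 1.1
    return tag, weight
```

So ((2girls)) lands at roughly 1.21, a touch weaker than the explicit (2girls:1.3).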


I generally use WAI-ANI-NSFW for my anime images if you're wondering.

Another issue I am facing is trying to place characters on a large background. Pony and most character-oriented models will render a character and make it fill the max resolution, which is a pain. I saw a pic on Civitai that didn't do that, though, but the person who made it was very vague in his answer. He mentioned he achieved that with ADetailer and upscaling?
You can try using prompts to push the character back, like "standing in the distance", or just inpaint them in, but I can't really say for sure.
 

Frlewdboi

New Member
Jun 23, 2022
8
4
Thanks for your answer.

By "not working" I mean I can install it and "use it", but it doesn't change the output at all whether it's enabled or disabled. I tried the demo prompt, which worked properly with another model. It took me some time to realize, because I had managed to get my prompt to do mostly what I wanted, so I didn't see it at first. With any other SDXL model, it works as intended.

Now trying WAI-ANI-NSFW. Forge Couple does something, but it's too limited: as soon as I try to make the people interact, it goes haywire; the left person goes right, the right person disappears, and whatnot. I'll work something out; I'll need it later anyway.

I tried using "standing in the distance" for the characters, but it didn't work at all either. Pretty sure it's a Pony problem again...

For reference, this is what I am talking about:

The guy who made it wrote this, and I don't understand what he means. Did he generate the background with another engine, then paint the characters over? Or did he generate the characters initially, and the scenery around them got generated by the upscale? If it's the latter, I'd really like to understand how exactly.

This is upscaled from 1536-640 with Adetailer for automatic segmenting and inpainting of any characters actually, with the person_yolov8n-seg segmenter
 

CBTWizard

Newbie
Nov 11, 2019
18
18
For reference, this is what I am talking about:

The guy who made it wrote this, and I don't understand what he means. Did he generate the background with another engine, then paint the characters over? Or did he generate the characters initially, and the scenery around them got generated by the upscale? If it's the latter, I'd really like to understand how exactly.
Oh, you mean that. They simply used the ADetailer plugin to add detail to the existing characters in the shot. However, in the picture provided, it doesn't look like the characters were able to trigger the ADetailer model's detection threshold, seeing as they don't appear to have been detailed at all.
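That detection step can be pictured as a simple filter over the segmenter's hits: anything below the confidence threshold is skipped entirely (which is why small, distant figures often never get detailed), and surviving boxes are padded into inpaint masks. A hypothetical, simplified sketch; the tuple format is an assumption standing in for the detector's real output, not ADetailer's actual internals:

```python
def select_for_inpaint(detections, confidence_threshold=0.3, pad=32,
                       image_w=1536, image_h=640):
    """Keep detections above the confidence threshold and pad their
    boxes so the inpaint mask covers a little context around each hit.

    `detections`: list of (x1, y1, x2, y2, confidence) tuples, a
    simplified stand-in for what a YOLO person segmenter returns.
    """
    masks = []
    for x1, y1, x2, y2, conf in detections:
        if conf < confidence_threshold:
            continue  # below threshold: the detailer skips it entirely
        masks.append((max(0, x1 - pad), max(0, y1 - pad),
                      min(image_w, x2 + pad), min(image_h, y2 + pad)))
    return masks

# A confident person hit gets a padded mask; a weak hit is ignored.
hits = [(100, 100, 200, 300, 0.9), (50, 50, 60, 60, 0.1)]
masks = select_for_inpaint(hits)  # only the 0.9-confidence hit survives
```

Lowering the confidence threshold in the ADetailer settings is the usual fix when characters in the distance are being missed.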

The left picture is what it looks like without ADetailer applied, and the right one is what it looks like with it.
sample.png
This was made using the ADetailer settings from the picture you provided.


As for the shot composition, I'm guessing it's because of the additional background elements they added to the prompt, like "horizon, scenery, sky, ocean, city lights, etc.", to pan the shot out so it doesn't focus specifically on the characters, especially since the only character prompts in the shot are "a woman standing on the beach at night" and "1girl".

As for the 2nd character in the shot, it was probably a fluke: the model interpreted the prompt as two different characters because of how far apart the two prompts are, and they just rolled with it anyway. :p
 
Last edited:

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,145
1,956
Hey guys! I have a question (and sorry that I didn't come back to the other image I asked about; my computer crashed with the Pony model, so I'll stick to SD 1.5 until I can upgrade).

So I downloaded a LoRA that helps with spreading ass/pussy, but something is messed up. My workflow creates a low-detail version first and then scales it up 2x through a second KSampler; pretty basic.

But while the first version already isn't too good, the final result gets even worse. The LoRA is supposed to make the pussy and anus look better and more detailed, but details actually get lost in the upscaling process. See here:

1728934954136.png

Can you check my workflow and see what is causing this? I also find the first pass isn't too good either.

Thanks :)

ComfyUI_00772_.png
 
Last edited:

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,145
1,956
But while the first version already isn't too good, the final result gets even worse. The LoRA is supposed to make the pussy and anus look better and more detailed, but details actually get lost in the upscaling process. See here:

View attachment 4133056

Can you check my workflow and see what is causing this? I also find the first pass isn't too good either.

Thanks :)

View attachment 4133060
Okay, never mind, I figured it out! I just had to give more weight to those prompts, and after the first KSampler I put another two positive and negative prompt nodes in between, described what was missing, added enough denoise + CFG, and then it comes out as desired. It's pretty cool, as I have so much more control by re-defining the tags halfway through.
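For anyone copying this two-pass setup, it can be sketched like so. The 0.5 denoise, CFG 7.0 and 512x768 base size are illustrative values, not the poster's exact settings; the one hard constraint is that SD-family latents want image dimensions in multiples of 8:

```python
def hires_pass_size(width: int, height: int, scale: float = 2.0):
    """Resolution for the second (upscale) KSampler pass, snapped to
    the 8-pixel multiples that SD-family latents require."""
    def snap(v: int) -> int:
        return int(round(v * scale / 8)) * 8
    return snap(width), snap(height)

# Illustrative two-pass settings in the spirit of the post above:
# the first pass samples from scratch (full denoise); the second
# re-samples the upscaled latent with its own (re-worded) prompts
# and a partial denoise, so it refines rather than repaints.
first_pass = {"size": (512, 768), "denoise": 1.0, "cfg": 7.0}
second_pass = {"size": hires_pass_size(512, 768), "denoise": 0.5, "cfg": 7.0}
```

The second pass's denoise is the lever: too low and the new prompts barely register, too high and the composition from the first pass gets thrown away.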
 

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,145
1,956
Why is nobody posting here anymore btw? Did you all stop using Stable Diffusion? Is there a new thread or a new AI everyone is using?
 
  • Like
Reactions: Sepheyer

hkennereth

Member
Mar 3, 2019
237
775
Why is nobody posting here anymore btw? Did you all stop using Stable Diffusion? Is there a new thread or a new AI everyone is using?
I wouldn't say I gave up on it; I'm still quite interested in the technology... but I had some issues with my ComfyUI installation that borked everything, so I need to reinstall it and every plugin, and then rebuild my workflows from scratch. There's also the fact that if I were to start making images today, I would really just want to use FLUX, since the model is so much better than Stable Diffusion, but that would mean I would also need to retrain all my custom LoRAs or build some complicated multi-step workflows with face replacement... and honestly, I couldn't find the motivation to do any of that. Maybe I'll get around to it at some point.
 
  • Like
Reactions: Sepheyer

Sharinel

Active Member
Dec 23, 2018
598
2,511
My recent image creation technique using Forge.

Step 1 - create images with Pony (specifically )
Step 2 - Inpainting with Flux
Step 3 - profit??meme

Example - Doing a Halloween-themed batch, here's one that I got. This is the raw output with no touching up of the eyes or hands etc.
00069-2627708411.png

Then I send it to inpainting and set that up for flux-dev. I set the tab to 'inpaint not masked' and then mask out the bits that Flux doesn't like (I bet you can guess which parts)
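For the curious, the 'inpaint not masked' toggle is effectively just a mask inversion: the brushed pixels are protected and everything else is regenerated. In toy form:

```python
def invert_mask(mask):
    """'Inpaint not masked' is equivalent to inverting the mask:
    brushed pixels (1) are protected, everything else (0) is redrawn.

    `mask` is a 2-D list of 0/1 values, 1 = brushed area.
    """
    return [[1 - px for px in row] for row in mask]

protect = [[0, 1], [1, 0]]       # brushed: the parts Flux should leave alone
redraw = invert_mask(protect)    # -> [[1, 0], [0, 1]]: the parts Flux repaints
```

Masking the few parts you want to keep is often much less brushwork than masking everything you want changed, which is why this mode suits whole-image retouching.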

1729546468435.png
I'm sure you could change things like the sampling steps etc., but I'm lazy, and my 4090 doesn't take that long to redo the image. Which leads to:
00110-238924563.png

It has the added bonus of fixing the hands/eyes right away (no ADetailer was used; I don't find it works too well for me with Flux).

Comparison -

A lil tip for you all :)
 

Dir.Fred

Member
Sep 20, 2021
206
584
Just want to add that I was a complete AI art novice a few days ago, and now I'm generating some very fun, and some very dubious, output from Flux directly in Forge. It's really good at understanding literal plain-English prompts as long as you stick to a decent structure.

Here's my prompt. Read it while looking at the picture and I'm sure you'll agree that it's not hard to make funky art any more. (Monster tag, I guess;)):
A realistic uncensored ultraHD candid side view monochrome photo of a very small skinny woman wearing a black choker, topless black corset, very long black latex gloves, black sheer nylon holdup stockings, and black stiletto heel pumps.
She has the flat chest, lithe body and toned limbs of a young gymnast or acrobat.
She has two torsos and four legs. Her lower torso is attached like a centaur.
She is totally bald and has pale skin as white as snow with heavy gothic makeup and black lips.

She looks sad and has her head bowed low with both arms behind her head, next to an infinity pool in a dark shadowy corner inside a vast empty concrete hall.
Here's my result. It is, admittedly, cherry-picked, but I got a dozen good results out of a half-hour batch on my 3090. There are a lot of good cherries, YMMV, is what I'm saying:


Here's my why-tf:
I saw a freaky LoRA for four-legged people and had just re-watched Dune. So I combined it with a Giedi Prime LoRA to spawn a Harkonnen pet.



I'm using Forge (webui_forge_cu121_torch231.7z), the Flux 1 Dev checkpoint, and a couple of LoRAs I downloaded from Civitai (Humantaur and Giedi Prime).

I'm shocked at how quickly I went from skeptic to convert.

Flux with LoRAs, and/or Pony with Flux retouching as per Sharinel's great guide on masking, can do pretty much anything you can imagine, and you don't even need current-generation gaming hardware to get very reasonable rendering times.
 

Dir.Fred

Member
Sep 20, 2021
206
584
Try one of the Civitai BJ LoRAs with Flux, or use a Pony merge like Uberrealistic Porn Merge with a straightforward prompt to spit out a dozen, then mask and Flux-retouch the best sample(s).
 

pazhentaigame

New Member
Jun 16, 2020
14
3
Will there ever be load-time improvements for future models? I've tried XL, Pony, and Flux. I know they're good, but the load time just kills my patience and sends me back to 1.5. Well... it's a silly question; I know the tech will never look back. Just a rant now :cry:
 

sharlotte

Member
Jan 10, 2019
303
1,594
No easy answer if you're talking about generation time. Flux is relatively new, but generation speed is getting better, and with less GPU VRAM required. See this one: on how to (possibly) improve generation speed. There are loads of videos on YouTube by people like Olivio Sarrikas, Nerdy Rodent, ... who share new techniques and new features on a weekly basis.
And if you know what images you want to generate and don't have the GPU power for it, you could always 'rent' a graphics card from sites such as RunPod, upload your flows and models (that in itself takes time), and then generate away. Of course, that isn't free, but it might be a lot cheaper than buying the next 5090 ;)
 

Rais

Newbie
Sep 10, 2019
51
46
Hi guys.
I'm looking for someone who can actually use regional prompting in ComfyUI. I have some knowledge of it and some observations, but my results are poor (in my opinion). Here is some of my work, without regional prompting and with it: akali_0025.png ComfyUI_temp_nvnup_00005_.png