[Stable Diffusion] Prompt Sharing and Learning Thread

asdfgggggggg

New Member
Nov 3, 2018
Hey guys, I'm new to SD. I've been following the guide in this thread (which is awesome by the way), and I was wondering if I could ask for some help: I generated a character I really like and I want to be able to consistently generate that character and that art style. I tried using the same seed and prompt, but I noticed that the more prompts I added, the more the art style and character changed. Do you guys have any tips for this? What is this art style called?
[Attached image: 00085-3342861888.png]
 

CBTWizard

Newbie
Nov 11, 2019
I tried using the same seed and prompt, but I noticed that the more prompts I added, the more the art style and character changed. Do you guys have any tips for this?
As far as I know, your options are Img2Img, ControlNet (unreliable/incompatible with Pony), variation seeds, or generating several images similar to this one and training a LoRA on them. The LoRA route can take a good chunk of your time depending on the method used, but it gives the most consistency.
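For illustration only, here's a rough sketch of the fixed-seed and Img2Img ideas using the diffusers library rather than the web UIs most of us use here; the checkpoint, prompt, and seed below are just stand-ins, not anything from this thread:

```python
# Rough sketch (diffusers), not a web-UI recipe. Checkpoint, prompt and seed are stand-ins.
import torch
from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # stand-in; any SDXL/Pony checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompt = "1girl, short silver hair, red jacket, city street at night"  # hypothetical character prompt

# Option 1: lock the seed so the same prompt reproduces the same composition.
generator = torch.Generator("cuda").manual_seed(1234)
base = pipe(prompt, generator=generator).images[0]

# Option 2: Img2Img keeps the original layout and style, so extra tags only nudge the details
# instead of reshuffling the whole image the way they do in plain txt2img.
img2img = AutoPipelineForImage2Image.from_pipe(pipe)
variant = img2img(
    prompt + ", smiling, holding umbrella",  # the newly added tags
    image=base,
    strength=0.35,  # low strength = stay close to the source image
    generator=torch.Generator("cuda").manual_seed(1234),
).images[0]
variant.save("variant.png")
```

The same logic applies in the web UIs: keep the seed fixed, and when you add tags, run them through Img2Img at a low denoise on the image you already like instead of re-rolling from scratch.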
 

Frlewdboi

New Member
Jun 23, 2022
Hi everyone. I have been seriously learning for a week now, have installed Forge and ComfyUI, and have used multiple models, checkpoints, LoRAs and whatnot. I have mostly done txt2img and dabbled in inpainting, but not that much. I understand the concepts of generative AI and am starting to tame prompts, basically.

For now, I have been getting pretty good results with Pony and its derivatives. I hit a couple of inherent AI limitations, like describing two or more DIFFERENT characters (pretty hard to avoid when you want to generate ongoing sex).

I managed to get two. Now the tricky part: three and more. And then I discovered Forge Couple, which literally lets the user define sub-zones / sub-prompts. Like, describe each character on one prompt line, and the general setting on another, and then have zones cover each other. They don't need to be distinct.

Hurray!

Except... of course, it does not work with Pony.

Do you know a way I can make that happen? I'd rather keep Pony, but I can switch to a model that gives good NSFW / anime style too.


Another issue I am facing is trying to place characters in a large background. Pony and most character-oriented models will render a character and make it fill the whole max resolution, which is a pain. I saw a pic on Civitai that didn't do that, though, but the person who made it was very vague in his answer. He mentioned he achieved that with ADetailer and upscaling?

If anyone can explain this, it would be very helpful. Thanks!
 

CBTWizard

Newbie
Nov 11, 2019
...except... of course, it does not work with Pony.

Do you know a way I can make that happen? I'd rather keep Pony, but I can switch to a model that gives good NSFW / anime style too.
I'm using reForge so I don't know if this applies to you, but Forge Couple works for me. As the GitHub page suggests, use 1boy/1girl tags on every line to help distinguish the characters; if you have two characters of the same gender, add 2boys/2girls to both of their lines as well as to the global-effect line if you have that enabled, and put emphasis on them with parentheses, i.e. (2girls), ((2girls)), or (2girls:1.3). You can download the picture in the spoiler below and open it up in Stable Diffusion for an example.
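For what it's worth, a hypothetical prompt layout along those lines, assuming the "global effect" option is pointed at the first line (all tags made up, just to show the structure of one line per region):

```
2girls, (2girls:1.3), beach, sunset, scenery
1girl, 2girls, blonde hair, red bikini, standing
1girl, 2girls, black hair, blue one-piece, sitting
```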

[Spoiler: example image]

I generally use WAI-ANI-NSFW for my anime images if you're wondering.

Another issue I am facing is trying to place characters in a large background. Pony and most character-oriented models will render a character and make it fill the whole max resolution, which is a pain. I saw a pic on Civitai that didn't do that, though, but the person who made it was very vague in his answer. He mentioned he achieved that with ADetailer and upscaling?
You can try using prompts to help push the character back, like "standing in the distance", or just inpaint them in, but I can't really say for sure.
 

Frlewdboi

New Member
Jun 23, 2022
Thanks for your answer.

by "not working" i mean i can install it and "use it", but it does not change any output when enabled or disabled. I tried with the demo prompt, which worked properly with another model. It took me some time to realize it because i managed to get my prompt do mostly what I wanted so i didnt see it at first. using any other SDXL model, it works as intended.

Now trying WAI-ANI-NSFW. Forge Couple does something, but it's too limited: as soon as I try to make the people interact, it goes haywire; the left person goes right, the right person disappears, and whatnot. I will work something out around it; I will need it later anyway.

I tried using "standing in the distance" for the characters, but it didn't work at all either. Pretty sure it is a Pony problem again...

For reference, this is what I am talking about:

The guy who made it wrote this, and I don't understand what he means. Did he generate the background with another engine, then paint the characters over? Or did he generate the characters initially and the scenery around them got generated by the upscale? If it is the latter, I'd really like to understand how exactly.

This is upscaled from 1536x640 with ADetailer for automatic segmenting and inpainting of any characters actually, with the person_yolov8n-seg segmenter
 

CBTWizard

Newbie
Nov 11, 2019
For reference, this is what I am talking about:

The guy who made it wrote this, and I don't understand what he means. Did he generate the background with another engine, then paint the characters over? Or did he generate the characters initially and the scenery around them got generated by the upscale? If it is the latter, I'd really like to understand how exactly.
Oh, you mean that. They simply used the ADetailer plugin to add detail to the existing characters in the shot. However, in the picture provided, it doesn't seem like the characters triggered the ADetailer model's detection threshold, seeing as they don't look like they were detailed at all.

The left picture is what it looks like without ADetailer applied, and the right one is what it looks like with it.
[Attached image: sample.png]
This was made using the ADetailer settings from the picture you provided.
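If you're curious what ADetailer is roughly doing under the hood, here's a hedged sketch of the detect-then-inpaint idea using the ultralytics YOLO segmenter mentioned in that quote plus a diffusers inpainting pipeline. The real extension also crops around each detection, pads the mask, and has its own confidence threshold; the checkpoint path and prompt below are stand-ins:

```python
# Sketch of an ADetailer-style pass: detect people, then inpaint each mask to add detail.
# Not the extension's actual code; checkpoint and prompt are stand-ins.
import numpy as np
import torch
from PIL import Image
from ultralytics import YOLO
from diffusers import AutoPipelineForInpainting

image = Image.open("upscaled.png").convert("RGB")

# 1. Segment people (the quoted post used the person_yolov8n-seg model).
detector = YOLO("person_yolov8n-seg.pt")
result = detector(image)[0]

inpaint = AutoPipelineForInpainting.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # stand-in; normally the same checkpoint as the base render
    torch_dtype=torch.float16,
).to("cuda")

# 2. Inpaint each detected person at a moderate denoise so only the masked area is redrawn.
if result.masks is not None:
    for mask_tensor in result.masks.data:
        mask = Image.fromarray((mask_tensor.cpu().numpy() * 255).astype(np.uint8)).resize(image.size)
        image = inpaint(
            prompt="1girl, detailed face, detailed skin",  # hypothetical per-character prompt
            image=image,
            mask_image=mask,
            width=image.width,
            height=image.height,
            strength=0.4,  # keep the composition, only sharpen what's inside the mask
        ).images[0]

image.save("detailed.png")
```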

You don't have permission to view the spoiler content. Log in or register now.

As for the shot composition, I'm guessing it's because of the additional background elements they added to the prompt, like "horizon, scenery, sky, ocean, city lights, etc.", which help pan out the shot so it doesn't focus specifically on the characters, especially since the only prompts for the character in the shot are "a woman standing on the beach at night" and "1girl".
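To illustrate with made-up prompts (not the ones from that Civitai post), here's the same subject framed two ways; padding the prompt with scenery tags like the ones above, plus a framing tag such as "wide shot", tends to pull the camera back:

```
1girl, a woman standing on the beach at night
1girl, a woman standing on the beach at night, wide shot, scenery, horizon, night sky, ocean, city lights in the distance
```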

As for the 2nd character in the shot, it was probably a fluke: the model interpreted the prompt as two different characters because of how far apart the two prompts are, and they just rolled with it anyway. :p
 

Fuchsschweif

Well-Known Member
Sep 24, 2019
Hey guys! I have a question (and sorry that I didn't come back to the other image I asked about; my computer crashed with the Pony model, so I'll stick to SD 1.5 until I can upgrade).

So I downloaded a LoRA that helps with spreading ass/pussy, but something is messed up. My workflow creates a low-detail version first and then scales it up 2x through a second KSampler, pretty basic.

But while the first version isn't that good to begin with, the final result gets even worse. The LoRA is supposed to make the pussy and anus look better and more detailed, but details actually get lost in the upscaling process. See here:

[Attached image: 1728934954136.png]

Can you check my workflow and see what is causing this? I also find the first pass doesn't come out too well either.

Thanks :)

[Attached workflow: ComfyUI_00772_.png]
 

Fuchsschweif

Well-Known Member
Sep 24, 2019
But while the first version isn't that good to begin with, the final result gets even worse. The LoRA is supposed to make the pussy and anus look better and more detailed, but details actually get lost in the upscaling process. See here:

Can you check my workflow and see what is causing this? I also find the first pass doesn't come out too well either.

Thanks :)
Okay, never mind, I figured it out! I just had to give more weight to these prompts, and after the first KSampler I put another two positive and negative prompt nodes in between, describe what's missing, add enough denoise + CFG, and then it comes out as desired. It's pretty cool, as I have so much more control by re-defining the tags halfway through.
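In case it helps anyone else, a rough diffusers equivalent of that two-pass idea (this is just a sketch, not my actual ComfyUI graph; the checkpoint, LoRA path, prompts, and values are stand-ins):

```python
# Sketch of the two-pass flow: base render, 2x upscale, then img2img with a re-defined
# prompt and enough denoise/CFG for the new tags to actually take effect.
import torch
from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # stand-in SD 1.5 checkpoint
    torch_dtype=torch.float16,
).to("cuda")
# pipe.load_lora_weights("detail_lora.safetensors")  # the detail LoRA would load here (stand-in path)

# Pass 1: low-resolution composition.
base = pipe("1girl, lying on bed, bedroom", height=512, width=512,
            guidance_scale=7.0).images[0]

# 2x upscale (plain resize here; a model upscaler like ESRGAN is the usual choice).
upscaled = base.resize((1024, 1024))

# Pass 2: img2img with a second, more specific prompt. If the denoise is too low, the
# re-defined tags (and the LoRA's detail) get averaged away, which is what I was seeing.
img2img = AutoPipelineForImage2Image.from_pipe(pipe)
final = img2img(
    prompt="1girl, lying on bed, bedroom, spread legs, detailed anatomy",  # re-defined tags, stand-ins
    negative_prompt="blurry, lowres",
    image=upscaled,
    strength=0.55,       # the "denoise" knob
    guidance_scale=8.0,  # a bit more CFG so the added tags are respected
).images[0]
final.save("final.png")
```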
 

hkennereth

Member
Mar 3, 2019
Why is nobody posting here anymore btw? Did you all stop using Stable Diffusion? Is there a new thread or a new AI everyone is using?
I wouldn't say I gave up on it; I'm still quite interested in the technology... but I had some issues with my ComfyUI installation that borked everything, so I need to reinstall it and every plugin, and then rebuild my workflows from scratch. And there's also the fact that if I were to start making images today, I would really just want to use FLUX, since the model is so much better than Stable Diffusion, but that would mean I would also need to retrain all my custom LoRAs or build some complicated multi-step workflows with face replacement... and honestly, I couldn't find the motivation to do any of that. Maybe I'll get around to it at some point.
 