[Stable Diffusion] Prompt Sharing and Learning Thread

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
My bad, I meant outfit in a slightly different context. Say I want to make a trainer-like game with an inventory system where you can dress/undress a chara. The HS2 girls are lovely, but the SD girls are the next level - so I hope to find a workflow. Using HS2 it was rather trivial: a bunch of bitmap layers toggled up and down and sliced-and-diced in various ways: .

And naturally, I think the inpainting with controlnet gets me there - I'd have a chara that I can switch outfits for.

Thanks for the links - time for those girls to put on a fashion show ;)
I'm looking forward to seeing it when it's done. :)(y)
 
  • Red Heart
Reactions: Sepheyer

botc76

The Crawling Chaos, Bringer of Strange Joy
Donor
Oct 23, 2016
4,476
13,382
Maybe someone here can help me out a bit. I'm pretty new to AI art/Stable Diffusion and have learned pretty much by trial and error so far.
I mostly try to create pictures of superheroines, and a lot of them have unusual skin colours: golden, green, blue etc.
I tried it with prompts like this:

{green_skin}

but it works irregularly: sometimes I get the specified colour on the costume instead, sometimes only part of the skin tone changes. Is there a specific way with which I can be sure of the results? Or at least "surer"?
 

Dagg0th

Member
Jan 20, 2022
279
2,746
Maybe someone here can help me out a bit. I'm pretty new to AI art/Stable Diffusion and have learned pretty much by trial and error so far.
I mostly try to create pictures of superheroines, and a lot of them have unusual skin colours: golden, green, blue etc.
I tried it with prompts like this:

{green_skin}

but it works irregularly: sometimes I get the specified colour on the costume instead, sometimes only part of the skin tone changes. Is there a specific way with which I can be sure of the results? Or at least "surer"?

Try this instead:

(green skin:1.5)

This will increase the weight of the token. Make sure to remove any negative prompt embeddings like easynegative, bad pictures, bad_prompt, etc.; those try to correct the skin back to the "right" colour. But like most things, it's trial and error.

Also try a negative prompt like:
natural skin color

You can also try an RPG/fantasy LoRA like a-sovya rpg; they have RPG elements which make it easier to implement such fantasy elements. Or try another model checkpoint.
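If you're curious what that ":1.5" actually does under the hood, here's a rough Python sketch of the general idea (not the webui's actual code, just my understanding of it): the text encoder's output for the emphasised tokens is scaled by the weight, then rescaled so the overall conditioning mean stays roughly where it was.

```python
# Rough illustration of prompt weighting (assumed behaviour, loosely modelled on
# how the A1111 webui applies per-token weights to the CLIP text-encoder output).
import torch

def apply_token_weights(cond: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    """cond: (n_tokens, dim) text-encoder output; weights: (n_tokens,) per-token weights."""
    original_mean = cond.mean()
    cond = cond * weights.unsqueeze(-1)           # emphasise / de-emphasise tokens
    cond = cond * (original_mean / cond.mean())   # keep the overall magnitude stable
    return cond

# e.g. "(green skin:1.5)" gives the "green" and "skin" tokens a weight of 1.5
cond = torch.randn(77, 768) * 0.1 + 0.2           # dummy conditioning with a non-zero mean
weights = torch.ones(77)
weights[5:7] = 1.5                                # pretend tokens 5 and 6 are "green skin"
weighted = apply_token_weights(cond, weights)
```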
 
Last edited:

Jimwalrus

Well-Known Member
Sep 15, 2021
1,045
3,994
Maybe someone here can help me out a bit. I'm pretty new to AI art/Stable Diffusion and have learned pretty much by trial and error so far.
I mostly try to create pictures of superheroines, and a lot of them have unusual skin colours: golden, green, blue etc.
I tried it with prompts like this:

{green_skin}

but it works irregularly: sometimes I get the specified colour on the costume instead, sometimes only part of the skin tone changes. Is there a specific way with which I can be sure of the results? Or at least "surer"?
I tend to create a lot of superheroines, so I'm happy to see if there's anything I can do to help.
DM me with an example (containing prompts etc in the PNGInfo) and I'll take a look at it.

A good example here of even a pre-trained LoRA not really doing what it should, but being forced into it by a strengthened prompt:
With just "grey skin":
00111-1076702559.png

With "((grey skin))" [which BTW is the same as "(grey skin:1.21)" - each pair of brackets multiplies the weight by 1.1]:
00112-1076702559.png
 

botc76

The Crawling Chaos, Bringer of Strange Joy
Donor
Oct 23, 2016
4,476
13,382
I tend to create a lot of superheroines, so I'm happy to see if there's anything I can do to help.
DM me with an example (containing prompts etc in the PNGInfo) and I'll take a look at it.

A good example here of even a pre-trained LoRA not really doing what it should, but being forced into it by a strengthened prompt:
With just "grey skin":
View attachment 2715247

With "((grey skin))" [which BTW is the same as "(grey skin:1.21)" - each pair of brackets multiplies the weight by 1.1]:
View attachment 2715248
Is it important what kind of brackets/parentheses you use? Because I've seen quite a few prompt lists where they only seem to use those {} braces.
 
  • Like
Reactions: Jimwalrus

Jimwalrus

Well-Known Member
Sep 15, 2021
1,045
3,994
Is it important what kind of brackets/parentheses you use? Because I've seen quite a few prompt lists where they only seem to use those {} braces.
Yes, it matters!
() are the usual ones for emphasis in SD. {} is NovelAI syntax as far as I know. SD may be interpreting them correctly, but most likely it isn't - hence why it's separating "green" and "skin" in "{green_skin}".
It doesn't recognise {green_skin} as a discrete token for any of its embeddings, so it tries to split it down until it gets to words it recognises. Only use dashes and underscores where you want to keep a string whole to use it as a token in its own right (i.e. in a Textual Inversion). Otherwise SD will just treat them like a space.

BTW, [] are de-emphasis.

There's a good technical summary .
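To make the bracket rules concrete, here's a tiny Python sketch of how I understand the weights work out (my own approximation, not the webui's real parse_prompt_attention):

```python
import re

def emphasis_weight(fragment: str) -> float:
    """Effective weight of a simple wrapped fragment, A1111-style:
    each '(' multiplies by 1.1, each '[' divides by 1.1, '(text:1.5)' is explicit,
    and '{}' braces are NovelAI syntax that SD just passes through as plain text."""
    explicit = re.fullmatch(r"\(([^:()\[\]]+):([\d.]+)\)", fragment)
    if explicit:
        return float(explicit.group(2))
    parens = len(fragment) - len(fragment.lstrip("("))    # leading '(' count
    brackets = len(fragment) - len(fragment.lstrip("["))  # leading '[' count
    return round(1.1 ** parens / 1.1 ** brackets, 3)

print(emphasis_weight("(grey skin:1.5)"))   # 1.5
print(emphasis_weight("((grey skin))"))     # 1.21
print(emphasis_weight("[grey skin]"))       # 0.909
print(emphasis_weight("{green_skin}"))      # 1.0 - braces do nothing here
```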
 

me3

Member
Dec 31, 2016
316
708
My bad, I meant outfit in a slightly different context. Say I want to make a trainer-like game with an inventory system where you can dress/undress a chara. The HS2 girls are lovely, but the SD girls are the next level - so I hope to find a workflow. Using HS2 it was rather trivial: a bunch of bitmap layers toggled up and down and sliced-and-diced in various ways: .

And naturally, I think the inpainting with controlnet gets me there - I'd have a chara that I can switch outfits for.

Thanks for the links - time for those girls to put on a fashion show ;)
I've only had a quick read to catch up on the thread, so I might have missed some important details; this might have been suggested or dismissed already.
Since you're planning to use it in a game, I'm assuming you already have a trained model in some format to keep the same character.
Depending on how many different outfits you want and your setup, you could potentially train the character with the different outfits, so you could keep the character and the "outfit" fixed through various settings and poses. The same method could be applied to "rooms"/backgrounds as well, but there's the obvious question of how much work is "worth it" to spend on just prepping, and whether the time would be better spent dealing with the slight randomness in generating.

I doubt it'll be this easy, but considering you can merge LoRAs, it'd be rather fun and useful if you could simply merge character and "clothing" - see the rough sketch below.
I highly doubt it'd put the character in the clothing, but it would be a very good test of the AI's "logic" and instruction-following. :p
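For the record, the "merge" part really is just arithmetic on the weight tensors. Here's a rough sketch of a naive weighted merge (the file names are hypothetical, and proper merge scripts such as kohya-ss's handle rank/alpha mismatches far more carefully). As said, it will almost certainly blend the two concepts rather than dress the character, but it's an easy experiment.

```python
# Naive LoRA "merge": a weighted sum of matching tensors from two .safetensors
# files. Illustration only - the file names below are hypothetical.
from safetensors.torch import load_file, save_file

def merge_loras(path_a: str, path_b: str, out_path: str, ratio: float = 0.5) -> None:
    a, b = load_file(path_a), load_file(path_b)
    merged = {}
    for key in a.keys() | b.keys():
        ta, tb = a.get(key), b.get(key)
        if ta is not None and tb is not None and ta.shape == tb.shape:
            merged[key] = ratio * ta + (1.0 - ratio) * tb   # blend shared weights
        else:
            merged[key] = ta if ta is not None else tb      # keep unmatched weights as-is
    save_file(merged, out_path)

# merge_loras("character.safetensors", "outfit.safetensors", "merged.safetensors", ratio=0.6)
```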
 

me3

Member
Dec 31, 2016
316
708
Yes, it matters!
() are the usual ones for emphasis in SD. {} is NovelAI syntax as far as I know. SD may be interpreting them correctly, but most likely it isn't - hence why it's separating "green" and "skin" in "{green_skin}".
It doesn't recognise {green_skin} as a discrete token for any of its embeddings, so it tries to split it down until it gets to words it recognises. Only use dashes and underscores where you want to keep a string whole to use it as a token in its own right (i.e. in a Textual Inversion). Otherwise SD will just treat them like a space.

BTW, [] are de-emphasis.

There's a good technical summary .
Some other uses for [] can be found
 

Sharinel

Active Member
Dec 23, 2018
598
2,509
Can I recommend Styles? I use the ones from this free Patreon post :-

They make some interesting outputs from the same basic prompt. Here are some examples from the same model and prompt (different seeds though):
Vector Illustrations
00076-1449398350.png

then Indie Game
00085-478806624.png

and finally Black and White
00089-4262300653.png

You can get some awesome outputs just by adding a style, and of course you can make your own.
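If you haven't used them before: styles in the A1111 webui are just saved prompt snippets (kept in styles.csv) that get merged into whatever you type. Roughly, and assuming I remember the behaviour right, it works something like this sketch:

```python
# Rough sketch of how a saved style is applied (assumed behaviour): the style's
# prompt either replaces a "{prompt}" placeholder or is appended to your prompt.
def apply_style(base_prompt: str, style_prompt: str) -> str:
    if "{prompt}" in style_prompt:
        return style_prompt.replace("{prompt}", base_prompt)
    return f"{base_prompt}, {style_prompt}" if style_prompt else base_prompt

print(apply_style(
    "1girl, cyberpunk city, full body",
    "vector illustration, flat colours, clean line art, {prompt}, bold shapes",  # made-up style
))
```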
 

Sharinel

Active Member
Dec 23, 2018
598
2,509
So I thought I'd do a quick exercise to show how models/checkpoints/styles can change the entire look of the finished pic. I took the below prompt :-

1girl, nsfw, full body, (masterpiece, best quality, ultra-detailed, best shadow), full body, freedom, soul, cyberpunk, perfect anatomy, centered, approaching perfection, dynamic, highly detailed, smooth, sharp focus,
Negative prompt: canvas frame, (high contrast:1.2), (over saturated:1.2), (glossy:1.1), cartoon, 3d, ((disfigured)), ((bad art)), ((b&w)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck))), Photoshop, video game, ugly, tiling, poorly drawn hands, 3d render
Steps: 50, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3757479247, Size: 544x960, Model hash: 03363589fe, Model: cyberrealistic_v31, Version: v1.3.2

00050-3757479247.png

and thought I would run it through a plot to show how it looks with different models etc. Warning! Large pic
[Spoiler: large comparison grid of the same prompt across different models]

Some interesting outcomes. RPG v4 is set up to do D&D characters in armour etc., which is why I think some of its outcomes are a bit weird.
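If anyone wants to script this kind of comparison instead of using the webui's X/Y/Z plot, here's a rough diffusers-based sketch (checkpoint file names are hypothetical, and note that plain diffusers ignores the (word:1.2) attention syntax unless you add something like compel, so results won't match the webui exactly):

```python
# Same prompt and seed across several checkpoints - a rough stand-in for the
# webui's "checkpoint" plot axis. File names are hypothetical.
import torch
from diffusers import StableDiffusionPipeline

prompt = "1girl, nsfw, full body, masterpiece, best quality, ultra-detailed, cyberpunk"  # trimmed from the post above
negative = "canvas frame, cartoon, 3d, disfigured, bad art, blurry, bad anatomy"         # trimmed from the post above
seed = 3757479247

for ckpt in ("cyberrealistic_v31.safetensors", "rpg_v4.safetensors"):
    pipe = StableDiffusionPipeline.from_single_file(ckpt, torch_dtype=torch.float16).to("cuda")
    image = pipe(
        prompt,
        negative_prompt=negative,
        num_inference_steps=50,
        guidance_scale=7,
        width=544,
        height=960,
        generator=torch.Generator("cuda").manual_seed(seed),   # fixed seed for a fair comparison
    ).images[0]
    image.save(f"{ckpt.rsplit('.', 1)[0]}_{seed}.png")
```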
 

devilkkw

Member
Mar 17, 2021
323
1,093
So I thought I'd do a quick exercise to show how models/checkpoints/styles can change the entire look of the finished pic. I took the below prompt :-

1girl, nsfw, full body, (masterpiece, best quality, ultra-detailed, best shadow), full body, freedom, soul, cyberpunk, perfect anatomy, centered, approaching perfection, dynamic, highly detailed, smooth, sharp focus,
Negative prompt: canvas frame, (high contrast:1.2), (over saturated:1.2), (glossy:1.1), cartoon, 3d, ((disfigured)), ((bad art)), ((b&w)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck))), Photoshop, video game, ugly, tiling, poorly drawn hands, 3d render
Steps: 50, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3757479247, Size: 544x960, Model hash: 03363589fe, Model: cyberrealistic_v31, Version: v1.3.2

View attachment 2722501

and thought I would run it through a plot to show how it looks with different models etc. Warning! Large pic
[Spoiler: large comparison grid of the same prompt across different models]

Some interesting outcomes. RPG v4 is set up to do D&D characters in armour etc., which is why I think some of its outcomes are a bit weird.
Nice, I like seeing these tests, they're really useful. Thank you so much.
 
  • Like
Reactions: Sharinel and Mr-Fox

sharlotte

Member
Jan 10, 2019
299
1,590
Has anyone ever had such colour 'bleeds' or 'patches' in their generated images? I've had the issue today; I restarted SD and updated it, but it's still there in all models tested. VAE or no VAE, hires or not, I tried different settings. No idea where this comes from. Any help would be very welcome. 00001-2831270652-NSFW, a haselblad bokeh ((photograph)) of a stunning woman, Patricia37, (high...png 00003-1460951807-NSFW, a haselblad bokeh ((photograph)) of a stunning woman, Patricia37, (high...png 00004-1460951808-NSFW, a haselblad bokeh ((photograph)) of a stunning woman, Patricia37, (high...png 00005-1460951809-NSFW, a haselblad bokeh ((photograph)) of a stunning woman, Patricia37, (high...png
 

Jimwalrus

Well-Known Member
Sep 15, 2021
1,045
3,994
Has anyone ever had such colour 'bleeds' or 'patches' in their generated images? I've had the issue today; I restarted SD and updated it, but it's still there in all models tested. VAE or no VAE, hires or not, I tried different settings. No idea where this comes from. Any help would be very welcome. View attachment 2725384 View attachment 2725385 View attachment 2725386 View attachment 2725387
I have seen these "colour patches" before, from an over-baked TI being applied at too high a strength. Maybe try reducing the strength a little - (Patricia37:0.6), perhaps?
Also, you've got this in the prompt:
"rich colours hyper realistic lifelike texture dramatic lighting unreal engine trending on artstation cinestill 800"
Without commas or full stops it could be a little unclear how SD is supposed to apply these.

Of course, I could be wrong and it could be something else, but your CFG isn't excessive and everything else looks OK too.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,570
3,767
Has anyone ever had such colour 'bleeds' or 'patches' in their generated images? I've had the issue today, restarted SD, updated it, but still there in all models tested. Use VAE/no VAE, hires or not, tried different settings. No idea where this comes from. Any help would be very welcome. View attachment 2725384 View attachment 2725385 View attachment 2725386 View attachment 2725387
My bet is there aren't enough steps for the renderer to completely flesh out the image. Do humor me - please set 20 steps and see if this goes away.
 

Jimwalrus

Well-Known Member
Sep 15, 2021
1,045
3,994
My bet is there aren't enough steps for the renderer to completely flesh out the image. Do humor me - please set 20 steps and see if this goes away.
Apologies, I missed that - 15 steps is probably not enough, even with 30 Hi-res steps.

BTW, I'm currently running a few tests to see what the effects of more steps (and perhaps other things) are. I don't have the "Patricia37" TI as it's a personally-created one of sharlotte's, so that will inherently remove any effects from it from the running.
 
Last edited:

Jimwalrus

Well-Known Member
Sep 15, 2021
1,045
3,994
OK, here's a test with 10, 15, 20, 25 & 30 steps*:
10:
00032-1460951807.png
15:
00033-1460951807.png
20:
00034-1460951807.png (notably, the patches were still visible until the upscaling got rid of them)
25:
00035-1460951807.png
30:
00036-1460951807.png
Grid:
xyz_grid-0000-1460951807.png

My personal preference here would be 30 steps, but 25 works well too.

As I have previously said: "Play around with more or fewer steps to see what works best for what you're aiming for, but be prepared to increase or decrease them at any time"


*Without the TI as it's not publicly available.
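(If you'd rather script a sweep like this than use the X/Y/Z plot, here's a minimal sketch along the same lines as the earlier diffusers example - the checkpoint name is hypothetical, and the prompt is just a stand-in since sharlotte's TI isn't public:)

```python
# Same seed, increasing step counts - a rough stand-in for the webui's "Steps" axis.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "photorealistic_checkpoint.safetensors", torch_dtype=torch.float16  # hypothetical file
).to("cuda")

for steps in (10, 15, 20, 25, 30):
    image = pipe(
        "a photograph of a stunning woman, detailed skin, bokeh",       # stand-in prompt
        num_inference_steps=steps,
        generator=torch.Generator("cuda").manual_seed(1460951807),      # fixed seed, as in the grid
    ).images[0]
    image.save(f"steps_{steps}.png")
```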