My bad, I meant outfit in a slightly different context: say I want to make a trainer-like game with an inventory system where you can dress/undress a chara. The HS2 girls are lovely, but the SD girls are the next level, so I hope to find a workflow. Using HS2 it was a rather trivial bunch of bitmap layers, shuffled up and down and sliced-and-diced in various ways.
And naturally, I think the inpainting with controlnet gets me there - I'd have a chara that I can switch outfits for.
Thanks for the links - time for those girls to put on a fashion show
Maybe someone here can help me out a bit. I am pretty new to AI art/Stable Diffusion and have learned pretty much by trial and error so far.
I mostly try to create pictures of superheroines, and a lot of them have unusual skin colours: golden, green, blue, etc.
I tried it with prompts like this:
{green_skin}
but it works irregularly: sometimes I get the specified colour on the costume instead, sometimes only part of the skin tone changes. Is there a specific way with which I can be sure of the results? Or at least "surer"?
This will increase the weight of the token. Make sure to remove any negative embeddings like easynegative, bad pictures, bad_prompt, etc.; those try to correct the skin back to the "right" colour. But like most things, it's trial and error.
Try also a negative prompt like:
natural skin color
You can also try an RPG/fantasy LoRA like a-sovya rpg; they have RPG elements which make it easier to implement such fantasy features. Or try another model checkpoint.
I tend to create a lot of superheroines, so I'm happy to see if there's anything I can do to help.
DM me with an example (containing prompts etc. in the PNGInfo) and I'll take a look at it.
Here's a good example of even a pre-trained LoRA not really doing what it should, but being forced into it by a strengthened prompt:
With just "grey skin":
With "((grey skin))" [which, BTW, is effectively the same as "(grey skin:1.21)", since each pair of parentheses multiplies the weight by 1.1]:
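For anyone curious how that emphasis syntax cashes out numerically, here's a rough pure-Python sketch of the commonly documented A1111 rules (each pair of parentheses multiplies the weight by 1.1, and "(text:w)" sets it explicitly). This is an illustration, not the actual webui parser:

```python
# Sketch of Automatic1111-style prompt emphasis, assuming the commonly
# documented rules: each "(...)" multiplies the weight by 1.1, and
# "(text:w)" sets the weight explicitly.
import re

def parse_emphasis(prompt: str):
    """Return a list of (text, weight) pairs for a single emphasised span."""
    m = re.fullmatch(r"(\(+)([^():]+)(?::([\d.]+))?(\)+)", prompt)
    if not m:
        return [(prompt, 1.0)]          # no parentheses: default weight
    opens, text, explicit, closes = m.groups()
    if explicit is not None:
        weight = float(explicit)        # "(text:1.2)" form
    else:
        # "((text))" form: 1.1 per matched pair of parentheses
        weight = round(1.1 ** min(len(opens), len(closes)), 4)
    return [(text, weight)]
```

So "((grey skin))" comes out at 1.21, which is why it behaves almost identically to writing "(grey skin:1.2)" by hand.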
Yes, it does!
() is the usual syntax for emphasis in SD; {} is used by NovelAI as far as I know. The braces may be being interpreted correctly by SD, but likely they're not, hence why it's separating "green" and "skin" in "{green_skin}".
It doesn't recognise {green_skin} as a discrete token for any of its embeddings, so it splits it down until it gets to words it recognises. Only use dashes and underscores where you want to keep a string whole so it acts as a token in its own right (i.e. in a Textual Inversion); otherwise SD will just treat them like a space.
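To illustrate that splitting behaviour, here's a toy sketch (not the real CLIP tokenizer, and the trigger-word set is made up): unknown underscored or braced strings fall apart into plain words, while a known embedding trigger stays whole:

```python
# Illustrative sketch (NOT the real CLIP tokenizer): if a braced or
# underscored string is not a known embedding trigger, it gets broken
# into plain words, so "{green_skin}" ends up as "green" + "skin".
import re

KNOWN_TRIGGERS = {"my_character_ti"}  # hypothetical Textual Inversion names

def split_prompt_word(word: str):
    if word in KNOWN_TRIGGERS:
        return [word]  # kept whole: it maps to a trained embedding
    # strip NovelAI-style braces, then treat underscores/dashes like spaces
    return [w for w in re.split(r"[_\-]", word.strip("{}")) if w]
```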
I've only had a quick read to catch up on the thread, so I might have missed some important details; this might already have been suggested or dismissed.
Since you're planning to use it in a game, I'm assuming you already have a trained model in some format to keep the same character.
Depending on how many different outfits you want and your setup, you could potentially train the character with the different outfits, so you could keep the character and the "outfit" fixed through various settings and poses. The same method could be applied to "rooms"/backgrounds as well, but there's the obvious question of how much prep work is "worth it" versus just dealing with the slight randomness in generation.
I doubt it'll be this easy, but considering you can merge LoRAs, it'd be rather fun and useful if you could simply merge character and "clothing".
I highly doubt it'd put the character in the clothing, but it would be a very good test of the AI's "logic" and instructions.
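As a back-of-the-envelope illustration of that merge idea: LoRA weights are just tensors keyed by layer name, so a naive merge is a weighted sum over matching keys. Plain Python lists stand in for tensors here, and the key names are made up:

```python
# Rough sketch of the "merge two LoRAs" idea: a naive merge is a weighted
# sum of the weights under matching keys. Lists stand in for real tensors;
# the key names and ratio are illustrative only.
def merge_loras(lora_a: dict, lora_b: dict, ratio: float = 0.5) -> dict:
    merged = {}
    for key in lora_a.keys() & lora_b.keys():   # only keys both LoRAs share
        merged[key] = [ratio * a + (1 - ratio) * b
                       for a, b in zip(lora_a[key], lora_b[key])]
    return merged
```

As the post suspects, this blends the weights rather than "dressing" the character in the clothing; whether anything usable comes out is very much trial and error.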
Can I recommend Styles? I use the ones from a free Patreon post.
They make some interesting outputs from the same basic prompt. Here are some examples from the same model and prompt (different seeds, though):
Vector Illustrations
then Indie Game
and finally Black and White
You can get some awesome outputs just by adding a style, and of course you can make your own.
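For context, a Style in the A1111 webui is essentially a stored prompt template. A minimal sketch of how one is typically applied (the style texts below are made-up stand-ins, not the ones from the Patreon post):

```python
# Minimal sketch of how A1111-style "Styles" are commonly applied: each
# style is a template; "{prompt}" is replaced by the base prompt, and if
# the placeholder is absent the style text is appended after a comma.
# The style texts here are invented examples.
STYLES = {
    "Vector Illustrations": "vector illustration of {prompt}, flat colours",
    "Indie Game": "{prompt}, indie game concept art",
}

def apply_style(prompt: str, style_name: str) -> str:
    template = STYLES[style_name]
    if "{prompt}" in template:
        return template.replace("{prompt}", prompt)
    return f"{prompt}, {template}"
```

Making your own style is then just adding another template to the collection.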
"Quick" exercise? That must have taken a very long time to generate!
Thank you for doing this; X/Y/Z plots are so useful (if time-consuming) for visualising the effects of changing parameters.
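Under the hood an X/Y/Z plot is just a Cartesian product of the axis values, with one generation per cell; a minimal sketch of the bookkeeping (illustrative only, the actual webui script handles far more):

```python
# Sketch of what an X/Y/Z-plot run does conceptually: enumerate every
# combination of the axis values, then generate one image per combination.
from itertools import product

def xyz_grid(steps_axis, cfg_axis):
    """Return one settings dict per grid cell, row-major order."""
    return [{"steps": s, "cfg": c} for s, c in product(steps_axis, cfg_axis)]
```

This is also why the plots get slow quickly: the cell count multiplies with every axis value you add.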
Has anyone ever had colour 'bleeds' or 'patches' like these in their generated images? I've had the issue today; I restarted SD and updated it, but it's still there in all models tested. VAE/no VAE, hires or not, different settings: nothing helped. No idea where this comes from. Any help would be very welcome.
I have seen these "colour patches" before, from an over-baked TI being applied at too high a strength. Maybe try reducing the strength a little ((Patricia37:0.6), perhaps?).
Also, you've got this in the prompt:
"rich colours hyper realistic lifelike texture dramatic lighting unreal engine trending on artstation cinestill 800"
Without commas or full stops it could be a little unclear how SD is supposed to apply these.
Of course I could be wrong and it could be something else, but your CFG isn't excessive, and everything else looks OK too.
Apologies, I missed that: 15 steps is probably not enough, even with 30 hi-res steps.
BTW, I'm currently running a few tests to see what the effects of more steps (and perhaps other things) are. I don't have the "Patricia37" TI, as it's a personally-created one of sharlotte's, so its effects will inherently be missing from the run.
OK, here's a test with 10, 15, 20, 25 & 30 steps:
10:
15:
20:
(notably, the patches were still visible until the upscaling got rid of them)
25:
30:
Grid:
My personal preference here would be 30 steps, but 25 works well too.
As I have previously said: "Play around with more or fewer steps to see what works best for what you're aiming for, but be prepared to increase or decrease them at any time"