[Stable Diffusion] Prompt Sharing and Learning Thread

felldude

Active Member
Aug 26, 2017
511
1,502
Might have to stop using this. My electric bill jumped $300 or more a month (new bill is $765)... wtf... anyone notice their bill higher? Does SD work like cryptomining? If so, I'm quitting :eek:
It does work that way, yes, but even the most expensive card I am aware of, the A100 80GB, can only pull 300W.

Most cards are capped at 130W

1800W is the cap for a 115V wall outlet in the USA, which is about the draw of a hair dryer, water kettle or portable heater.
The average cost to run that for 24 hours non-stop is around $5 ($15 for Denmark).

Unless you're running 20 GPUs 24 hours a day, a $300 jump seems steep. Provided you don't live in Denmark.
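For anyone who wants to sanity-check that, here is a rough back-of-the-envelope calculation (the per-kWh rates are assumptions, roughly average US vs. Denmark prices):

Code:
# rough daily electricity cost for a rig pulling the full 1800W, purely illustrative
watts = 1800
hours = 24
kwh_per_day = watts / 1000 * hours    # 43.2 kWh
print(round(kwh_per_day * 0.12, 2))   # ~5.18 USD at ~$0.12/kWh (US-ish rate)
print(round(kwh_per_day * 0.35, 2))   # ~15.12 USD at ~$0.35/kWh (Denmark-ish rate)

A single consumer GPU drawing a few hundred watts for a couple of hours a day is a small fraction of that.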
 

Frogface29

Newbie
Feb 22, 2022
43
41
Hey guys, I really want to get into Stable Diffusion but I am kind of lost right now. My laptop is too weak, with 2x4GB of RAM, to run it locally, so I am trying to use Google Colab (Pro), but that doesn't work for some reason. Is there anyone out there who is also using Google Colab and who could help me out/share their setup or what worked for them?
 
  • Like
Reactions: Sepheyer

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
Liiike, I think I came to the realization that it won't be as simple as I originally thought it would be. So, the idea is kinda discarded.

View attachment 2712640
Yeah, but now that you are using ControlNet and OpenPose and have a ton more knowledge and experience, it could be worth a try again. If for no other reason, as a learning exercise. If you pull it off, it could be a template or foundation for future projects.
 
  • Red Heart
Reactions: Sepheyer

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
Hey guys, I really want to get into Stable Diffusion but I am kind of lost right now. My laptop is too weak, with 2x4GB of RAM, to run it locally, so I am trying to use Google Colab (Pro), but that doesn't work for some reason. Is there anyone out there who is also using Google Colab and who could help me out/share their setup or what worked for them?
Having a lot of RAM is not a bad thing, but it's the VRAM that matters, meaning how much video memory your GPU has. If it has at least 6GB of VRAM you can use SD. I have seen people with even less make it work, just don't expect large resolutions to come easy. There are also ways for people with low VRAM to get around it, such as tile upscaling.
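If VRAM is tight, the AUTOMATIC1111 webui also has launch options for low-memory cards. A minimal sketch of what you could put in webui-user.bat (which flags actually help depends on your GPU, so treat this as a starting point):

Code:
rem low-VRAM launch options for the AUTOMATIC1111 webui (illustrative)
set COMMANDLINE_ARGS=--medvram --xformers
rem swap --medvram for --lowvram on very small cards; both trade speed for memory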
 
  • Like
Reactions: Sepheyer

Sepheyer

Well-Known Member
Dec 21, 2020
1,528
3,598
Yeah, but now that you are using ControlNet and OpenPose and have a ton more knowledge and experience, it could be worth a try again. If for no other reason, as a learning exercise. If you pull it off, it could be a template or foundation for future projects.
I will probably eventually get there; for the time being I am trying to wrap my head around how I can make a single chara and have her have different outfits. Looks like inpainting and ControlNet are the answer. These past few days I "wasted" on a dead-end Tile ControlNet. Turns out that to function properly with ComfyUI it needs 30MB more video memory than I currently have... dog, ice cream, skates :)

So, yeah, I hope to eventually make it there ;)
 
  • Like
Reactions: Mr-Fox

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
I will probably eventually get there; for the time being I am trying to wrap my head around how I can make a single chara and have her have different outfits. Looks like inpainting and ControlNet are the answer. These past few days I "wasted" on a dead-end Tile ControlNet. Turns out that to function properly with ComfyUI it needs 30MB more video memory than I currently have... dog, ice cream, skates :)

So, yeah, I hope to eventually make it there ;)
There are LoRAs, TIs, etc. for clothing styles. Though the entire subject is always part of the training, using one of them at a low weight value can help you get the clothes you're after.
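In the A1111 webui the syntax for that is roughly the following - the LoRA name here is just a placeholder for whatever clothing LoRA you grab:

Code:
<lora:some_clothing_lora:0.5>, 1girl, wearing the outfit, ...

Keeping the LoRA weight somewhere around 0.4-0.6 tends to change the clothes without letting the LoRA take over the whole image.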


Example:
1687362563367.png
Source:

Example:
1687362726080.png 1687362758524.png
Source:
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,528
3,598
There are LoRAs, TIs, etc. for clothing styles. Though the entire subject is always part of the training, using one of them at a low weight value can help you get the clothes you're after.


Example:
View attachment 2713254
Source:

Example:
View attachment 2713258 View attachment 2713260
Source:
My bad, I meant outfit in a slightly different context. I mean, say I want to make a trainer-like game with an inventory system where you can dress/undress a chara. The HS2 girls are lovely, but the SD girls are the next level - so I hope to find a workflow. Using HS2 it was a rather trivial bunch of bitmap layers toggled up and down, sliced and diced various ways: .

And naturally, I think inpainting with ControlNet gets me there - I'd have a chara that I can switch outfits for.

Thanks for the links - time for those girls to put on a fashion show ;)
 
  • Like
Reactions: Mr-Fox

modine2021

Member
May 20, 2021
362
1,165
It does work that way, yes, but even the most expensive card I am aware of, the A100 80GB, can only pull 300W.

Most cards are capped at 130W

1800W is the cap for a 115V wall outlet in the USA, which is about the draw of a hair dryer, water kettle or portable heater.
The average cost to run that for 24 hours non-stop is around $5 ($15 for Denmark).

Unless you're running 20 GPUs 24 hours a day, a $300 jump seems steep. Provided you don't live in Denmark.
I don't run it 24/7, just downtime when bored. I mostly work in C4D, Substance/Mari and Daz... using an RTX 3060 and a Ryzen 2700 with a 750W PSU. I'm in the US... utilities are putting in these "Smart Meters" now (spyware, in my opinion).. I contacted my electric company about the rise; clearly something is being read wrong. My bill was never over $120 at most.
 
Last edited:

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
My bad, I meant outfit in a slightly different context. I mean, say I want to make a trainer-like game with an inventory system where you can dress/undress a chara. The HS2 girls are lovely, but the SD girls are the next level - so I hope to find a workflow. Using HS2 it was a rather trivial bunch of bitmap layers toggled up and down, sliced and diced various ways: .

And naturally, I think inpainting with ControlNet gets me there - I'd have a chara that I can switch outfits for.

Thanks for the links - time for those girls to put on a fashion show ;)
I'm looking forward to seeing it when it's done. :)(y)
 
  • Red Heart
Reactions: Sepheyer

botc76

The Crawling Chaos, Bringer of Strange Joy
Donor
Oct 23, 2016
4,421
13,198
Maybe someone here can help me out a bit. I am pretty new to AI art/Stable Diffusion and have learned pretty much by trial and error so far.
I mostly try to create pictures of superheroines, and a lot of them have unusual skin colours: golden, green, blue, etc.
I tried it with prompts like this:

{green_skin}

but it works irregularly; sometimes I get the specified colour on the costume instead, sometimes only part of the skin tone changes. Is there a specific way with which I can be sure of the results? Or at least "surer"?
 

Dagg0th

Member
Jan 20, 2022
204
1,998
Maybe someone here can help me out a bit. I am pretty new to AI art/Stable Diffusion and have learned pretty much by trial and error so far.
I mostly try to create pictures of superheroines, and a lot of them have unusual skin colours: golden, green, blue, etc.
I tried it with prompts like this:

{green_skin}

but it works irregularly; sometimes I get the specified colour on the costume instead, sometimes only part of the skin tone changes. Is there a specific way with which I can be sure of the results? Or at least "surer"?

Try this instead

(green skin:1.5)

This will increase the weight of the token. Make sure to remove any negative prompt embeddings like easynegative, bad pictures, bad_prompt, etc.; those try to correct the skin back to the "right" color. But like most things, it's trial and error.

Also try a negative prompt like:
natural skin color

You can also try an RPG/fantasy LoRA like a-sovya rpg; they have RPG elements which make it easier to get such fantasy elements. Or try another model checkpoint.
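To make that concrete, something along these lines (checkpoint and LoRA choice aside, this is purely an illustration):

Code:
Positive: 1girl, superheroine, (green skin:1.5), green-skinned, bodysuit, cape
Negative: natural skin color, (pale skin:1.2)

with no easynegative/bad_prompt style embeddings in the negative while you're dialing the skin colour in.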
 
Last edited:

Jimwalrus

Active Member
Sep 15, 2021
895
3,312
Maybe someone here can help me out a bit. I am pretty new to AI art/Stable Diffusion and have learned pretty much by trial and error so far.
I mostly try to create pictures of superheroines, and a lot of them have unusual skin colours: golden, green, blue, etc.
I tried it with prompts like this:

{green_skin}

but it works irregularly; sometimes I get the specified colour on the costume instead, sometimes only part of the skin tone changes. Is there a specific way with which I can be sure of the results? Or at least "surer"?
I tend to create a lot of superheroines, happy to see if there's anything I can do to help.
DM me with an example (containing prompts etc in the PNGInfo) and I'll take a look at it.

A good example here of even a pre-trained LoRA not really doing what it should, but being forced into it by a strengthened prompt:
With just "grey skin":
00111-1076702559.png

With "((grey skin))" [which BTW is the same as "(grey skin:1.2)"]:
00112-1076702559.png
 

botc76

The Crawling Chaos, Bringer of Strange Joy
Donor
Oct 23, 2016
4,421
13,198
I tend to create a lot of superheroines, happy to see if there's anything I can do to help.
DM me with an example (containing prompts etc in the PNGInfo) and I'll take a look at it.

A good example here of even a pre-trained LoRA not really doing what it should, but being forced into it by a strengthened prompt:
With just "grey skin":
View attachment 2715247

With "((grey skin))" [which BTW is the same as "(grey skin:1.2)"]:
View attachment 2715248
Does it matter what kind of brackets/parentheses you use? Because I've seen quite a few prompt lists where they only seem to use those {}
 
  • Like
Reactions: Jimwalrus

Jimwalrus

Active Member
Sep 15, 2021
895
3,312
Does it matter what kind of brackets/parentheses you use? Because I've seen quite a few prompt lists where they only seem to use those {}
Yes, it does!
() are the usual ones for emphasis in SD. {} is used in NovelAI as far as I know. The braces may be getting interpreted correctly by SD, but likely they're not - hence why it's separating "green" and "skin" in "{green_skin}".
It doesn't recognise {green_skin} as a discrete token for any of its embeddings, so it tries to split it down until it gets to words it recognises. Only use dashes and underscores where you want to keep a string whole to use it as a token in its own right (i.e. in a Textual Inversion). Otherwise SD will just treat them like a space.

BTW, [] are de-emphasis.

There's a good technical summary .
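If you want to see the actual numbers behind the brackets, this is roughly how the A1111 attention syntax works out (a quick sketch, not an official reference):

Code:
# AUTOMATIC1111 attention syntax: each () pair multiplies the weight by 1.1,
# each [] pair divides by 1.1, and (token:1.5) sets the weight explicitly
print(round(1.1 ** 1, 2))    # (grey skin)   -> 1.1
print(round(1.1 ** 2, 2))    # ((grey skin)) -> 1.21
print(round(1.1 ** -1, 2))   # [grey skin]   -> 0.91

NovelAI's {} reportedly uses a smaller factor (about 1.05 per brace), which is part of why copy-pasted NovelAI prompts behave differently in the webui.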
 

me3

Member
Dec 31, 2016
316
708
My bad, I meant outfit in a slightly different context. I mean, say I want to make a trainer-like game with an inventory system where you can dress/undress a chara. The HS2 girls are lovely, but the SD girls are the next level - so I hope to find a workflow. Using HS2 it was a rather trivial bunch of bitmap layers toggled up and down, sliced and diced various ways: .

And naturally, I think inpainting with ControlNet gets me there - I'd have a chara that I can switch outfits for.

Thanks for the links - time for those girls to put on a fashion show ;)
I've only had a quick read to catch up on the thread, so I might have missed some important details; this might already have been suggested or dismissed.
Since you're planning to use it in a game, I'm assuming you already have a trained model in some format to keep the same character.
Depending on how many different outfits you want and your setup, you could potentially train the character with the different outfits, so you could keep both the character and the "outfit" fixed through various settings and poses. The same method could be applied to "rooms"/backgrounds as well, but there's the obvious question of how much work is "worth it" to spend on just prepping, and whether the time would be better spent dealing with the slight randomness in generating.

I doubt it'll be this easy, but considering you can merge LoRAs, it'd be rather fun and useful if you could simply merge the character and the "clothing".
I highly doubt it'd put the character in the clothing, but it would be a very good test for the AI's "logic" and instructions :p
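Stacking them in the prompt at generation time is at least easy to test, assuming the A1111 LoRA syntax (the names below are made up):

Code:
<lora:my_character:0.8>, <lora:leather_outfit:0.6>, 1girl, standing, full body

It's not a true merge of the files, but it can be enough to get "this character in that outfit".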