[Stable Diffusion] Prompt Sharing and Learning Thread

me3

Member
Dec 31, 2016
316
708
When you're working with 2GB VRAM and forget to fix/set your prompts and walk away because generating is predicted to take a long time...
Happy accidents, I guess (just don't start counting fingers, nipples etc., get distracted by other things...)
1912145736.png
 

Jimwalrus

Active Member
Sep 15, 2021
930
3,423
When you're working with 2GB VRAM and forget to fix/set your prompts and walk away because generating is predicted to take a long time...
Happy accidents, I guess (just don't start counting fingers, nipples etc., get distracted by other things...)
View attachment 2641298
That's very impressive for being done in 2GB of VRAM. I couldn't manage anything like that with my old GTX 960!
 

me3

Member
Dec 31, 2016
316
708
People are probably already aware of this, but I can't remember seeing it mentioned in the thread.
By taking advantage of the point where elements/subjects are "removed" from a prompt, or "transition" from one to another, you can create some nice effects. Here are some examples; I tried to keep the prompts fairly short and clean. The result you get will change a lot depending on the seed, so I'm not sure I'd call it predictable, and the actual second element/subject might matter a lot less than you initially think, i.e. the simple green grass in the 3rd image.
Skull (1).png
Skull (2).png Skull (3).png

(Edited to add some more examples)
ex (1).png
ex (2).png
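For anyone who hasn't tried it, this trick uses AUTOMATIC1111's prompt-editing syntax. The prompts and fractions below are made-up illustrations of the three forms, not the exact prompts used for the images above:

```text
[flower skull:green grass:0.5]   swap "flower skull" for "green grass" halfway through sampling
[skull::0.6]                     remove "skull" from the prompt after 60% of the steps
[green grass:0.3]                add "green grass" only after 30% of the steps
```

Whole numbers in the last position are treated as an absolute step count, while fractions are a portion of the total steps.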
 

qazeqaze

Newbie
Jul 30, 2022
90
255
Have been messing about with:
Checkpoint:
VAE:
Upscaler:
Extensions:

Positive prompt:
Beautiful male, __portrait-type__, __artist-anime__, __hair-color__, __eyecolor__, __clothing-male__, __headwear-male__, (masterpiece, best quality, high quality, highres, ultra-detailed), __forest-type__, __flower__

Beautiful female, __portrait-type__, __artist-anime__, __hair-color__, __eyecolor__, __clothing-female__, __headwear-female__, (masterpiece, best quality, high quality, highres, ultra-detailed), __forest-type__, __flower__

Negative prompt:
bad_prompt_version2, bad-artist-anime, bad-hands-5, bad-image-v2,

View attachment 2635999


View attachment 2636000

View attachment 2636004
View attachment 2636033
That space/water image is amazing. Really, really like it. Nice work.
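For readers unfamiliar with the double-underscore tokens in the prompt above: they are wildcard placeholders (as used by the Dynamic Prompts / Wildcards extensions), each replaced at generation time by a random line from a matching .txt file in the wildcards folder. A hypothetical hair-color.txt might look like:

```text
# wildcards/hair-color.txt (hypothetical contents)
blonde hair
raven black hair
auburn hair
silver hair
```

Each image in a batch can then draw a different combination, which is why runs like this produce such varied results from one prompt.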
 

me3

Member
Dec 31, 2016
316
708
Revisiting an old prompt. This time I'm using my own .

View attachment 2646269
Judging by your prompts, you seem to have been fighting the pretty hard-to-win battle of models generally just thinking of "bikini" as one thing and not having any real concept of "variants" :(
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,794
Revisiting an old prompt. This time I'm using my own .

View attachment 2646269
Judging by your prompts, you seem to have been fighting the pretty hard-to-win battle of models generally just thinking of "bikini" as one thing and not having any real concept of "variants" :(
That's very common with SD.
You kinda have to use a hammer and hit it over the head repeatedly to get the result you want.. :LOL:
SD doesn't know anything it hasn't been trained on. How would you even name or categorize different styles of bikinis in the first place? With SD your best bet is to describe it instead of naming it, whatever "it" is. With this prompt I was trying to get a small bikini, both bottom and top. The problem is that the breast size is not being differentiated from the size of the top.
With a huge rack you get a huge bra or bikini top too. So I have tried to find a way to describe it. I used "undersized bikini", but it didn't work as intended or hoped for. I guess I'll have to include it in the dataset next time I make a LoRA.
 

me3

Member
Dec 31, 2016
316
708
Yeah, I've been clearing out thousands of images from when I was testing what kind of "common" naming/sizing/styling would work.
Considering how many clothing/fashion images there are online, it was a bit surprising how many basic concepts haven't been picked up. Just a basic/simple thing like using bra sizes would have been very effective and, you'd think, easy to do considering all the existing images from models, clothing stores, fashion magazines etc. Hairstyles are pretty hit and miss too, even basic hair lengths.

They should have spent less time pissing off artists and more time looking at boobs and head shots :giggle:
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,794
Yeah, I've been clearing out thousands of images from when I was testing what kind of "common" naming/sizing/styling would work.
Considering how many clothing/fashion images there are online, it was a bit surprising how many basic concepts haven't been picked up. Just a basic/simple thing like using bra sizes would have been very effective and, you'd think, easy to do considering all the existing images from models, clothing stores, fashion magazines etc. Hairstyles are pretty hit and miss too, even basic hair lengths.

They should have spent less time pissing off artists and more time looking at boobs and head shots :giggle:
Yeah I agree. We should all just look more at boobs.. :love: :D
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,794
Experiments with controlnet:

Before:

View attachment 2650511

After: (why do mine always look like they have tired eyebag eyes?)
View attachment 2650511
I suspect that the puffy eyes have to do with the source material the checkpoint and/or LoRA etc. was trained on.
You can try putting "puffy eyes" or "puffy eyelids" in the negative prompt with a weight of 1.2, or more if needed.
Example:
(puffy eyes:1.2) or (puffy eyelids:1.2)
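As an aside, the (text:weight) syntax is simple enough to parse mechanically. The toy Python below is only an illustration of the idea, not A1111's actual parser (which also handles nesting and escaping):

```python
import re

# Matches "(some text:1.2)" style weighted spans. Illustrative only;
# the real A1111 parser is more elaborate.
TOKEN = re.compile(r"\(([^():]+):([0-9.]+)\)")

def parse_weights(prompt):
    """Return a list of (text, weight) pairs; unweighted text defaults to 1.0."""
    parts = []
    last = 0
    for m in TOKEN.finditer(prompt):
        plain = prompt[last:m.start()].strip(" ,")
        if plain:
            parts.append((plain, 1.0))
        parts.append((m.group(1), float(m.group(2))))
        last = m.end()
    plain = prompt[last:].strip(" ,")
    if plain:
        parts.append((plain, 1.0))
    return parts

print(parse_weights("(puffy eyes:1.2), blurry"))
# [('puffy eyes', 1.2), ('blurry', 1.0)]
```

The weight scales how strongly those tokens influence the conditioning, which is why 1.2 pushes the concept harder than plain text.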
 

devilkkw

Member
Mar 17, 2021
308
1,053
A1111 has released a new update (v1.3.0).
This update adds Cross attention optimization settings.
I've made a test with all of this .
Settings and times are in the post.

Are you using it? Which is your favorite?
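For context on what those options change: the cross attention optimizations (Doggettx, xFormers, sdp, etc.) are different memory-efficient implementations of the same underlying attention computation. The sketch below, assuming PyTorch 2.x and made-up tensor sizes, shows the reference operation and the fused "sdp" kernel agreeing:

```python
import torch
import torch.nn.functional as F

def naive_attention(q, k, v):
    # Reference attention: softmax(Q K^T / sqrt(d)) V
    scale = q.shape[-1] ** -0.5
    return torch.softmax(q @ k.transpose(-2, -1) * scale, dim=-1) @ v

# Shapes are illustrative: (batch, heads, tokens, head_dim);
# 77 is the CLIP prompt-token count.
q = torch.randn(1, 8, 64, 40)
k = torch.randn(1, 8, 77, 40)
v = torch.randn(1, 8, 77, 40)

# PyTorch's fused kernel (the "sdp" option) computes the same result
# with lower peak memory use.
fused = F.scaled_dot_product_attention(q, k, v)
ref = naive_attention(q, k, v)
print(torch.allclose(fused, ref, atol=1e-4))  # True
```

Since the outputs match, the choice between these options is purely about speed and VRAM, not image quality.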
 

raventai

New Member
Jan 15, 2018
14
18
Sorry to interfere as a humble SD novice (and already baffled by all this body-horror AI thing...), but the first post (OP front page) is misleading regarding the way to implement LoRAs in SD. I lost two days trying to reconcile path problems and extension calls because of it. There is no need to use the Additional Networks extension; LoRAs are directly supported (or git pull is your friend) and it is a breeze. Thanks for all the effort, however; it is interesting to educate oneself, but I think DAZ is still far more efficient when you have precise Ren'Py needs... (and we have those monster quasi-NASA rigs...).
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,794
Sorry to interfere as a humble SD novice (and already baffled by all this body-horror AI thing...), but the first post (OP front page) is misleading regarding the way to implement LoRAs in SD. I lost two days trying to reconcile path problems and extension calls because of it. There is no need to use the Additional Networks extension; LoRAs are directly supported (or git pull is your friend) and it is a breeze. Thanks for all the effort, however; it is interesting to educate oneself, but I think DAZ is still far more efficient when you have precise Ren'Py needs... (and we have those monster quasi-NASA rigs...).
Yes I agree, DAZ is more consistent because you have direct control while SD is always a dice toss; however, SD is light years ahead in visuals and realism. With ControlNet and OpenPose, SD is catching up to DAZ in repeatability and consistency. Also, with SD you are not forced to endure endless menus only to tweak one little thing..