[Stable Diffusion] Prompt Sharing and Learning Thread

yomimmo

New Member
Nov 18, 2019
6
17
Can you post the screenshot of the error?
Everything starts fine, but when I select 'Generate' after typing the prompts in the webui, I get this error message: 'ValueError: Query/Key/Value should all have the same dtype query.dtype: torch.float32 key.dtype : torch.float32 value.dtype: torch.float16'. I only get this error when I put the '--xformers' parameter in the .bat
 

yomimmo

New Member
Nov 18, 2019
6
17
by the way, you need an Nvidia GPU for xformers. It takes advantage of their deep-learning hardware (CUDA/Tensor cores), afaik
Yeah, I have a GTX 1060 3GB, so I should be able to use that option with no problem. I have tried removing the '--medvram' option and leaving only '--xformers' and the 'opt split' option, and it keeps giving me the same error when generating the images.
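For what it's worth, that dtype mismatch means xformers is being handed a mix of fp16 and fp32 tensors. Besides the CUDA/driver update mentioned below, people in this situation report two workarounds: updating torch and xformers together, or bluntly forcing full precision (which costs the speed xformers was meant to buy). The flag names below are the stock A1111 ones, but whether either combination fixes a given rig is a guess on my part, not a guarantee:

```shell
REM webui-user.bat - pick ONE of these lines, not both.

REM 1) keep xformers, after updating torch + xformers together:
set COMMANDLINE_ARGS=--xformers

REM 2) blunt fallback: force everything to fp32 so the dtypes match
REM    (loses the xformers speed-up):
set COMMANDLINE_ARGS=--precision full --no-half
```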
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,528
3,598
Everything starts fine, but when I select 'Generate' after typing the prompts in the webui, I get this error message: 'ValueError: Query/Key/Value should all have the same dtype query.dtype: torch.float32 key.dtype : torch.float32 value.dtype: torch.float16'. I only get this error when I put the '--xformers' parameter in the .bat
You might have seen this thread:



Some folks have a similar issue; not sure if it's the same one. It looks like some are fixing it by updating the CUDA driver (I have zero idea myself, merely repeating a post towards the end of the thread).
 

Jimwalrus

Active Member
Sep 15, 2021
902
3,339
Is there a video tutorial
How do I create these images?
It's good for the standard SD 1.5 webui install (you will need to pause it many, many times!).
Personally I'd recommend SD webui over Automatic1111 as there are issues with that at the moment.
NMKD is a bit too simplified, limits you a bit.
N.B. don't make the mistake I made and install the newest version of Python. SD only works with 3.10.x
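To save anyone else that mistake: the webui's dependency stack currently targets Python 3.10, and newer installs tend to fail in ways that look unrelated. A trivial check you can run before installing (the version requirement is the one stated above; the helper name is my own):

```python
def is_supported(version_info) -> bool:
    # SD webui wants a 3.10.x interpreter; 3.11+ breaks its dependencies.
    return tuple(version_info[:2]) == (3, 10)
```

Run `import sys; print(is_supported(sys.version_info))` in the interpreter the webui will actually use.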

Also, you'll need an Nvidia GPU and a fairly decent one* - the best affordable option is an RTX 3060 12GB (~$340). You want as much vRAM as possible, all other specs are pretty much irrelevant!
AMD GPUs can be used, but I hope your Python's good...

For videos on creating images, anything by YouTuber is best.

Get yourself an account on Civitai (or sign in with a Discord acct) to download models, embeddings etc.

Don't touch SD2.1 unless you want to be stuck with SFW for the time being - there are community efforts to counteract the filters, but for the time being it's really difficult to get round. 1.5 is still excellent and is where 95% of us are anyway.


*I've had it running on a 2GB GTX 960 but it's functionally very, very limited.

Even 4GB is really below easily useable levels, you won't be able to do much more than 512x512 resolution and Restore Faces is hit and miss.

8GB is where it gets useable but limits the resolution a bit in upscaling. You can really start to do some Textual Inversion training though.

12GB allows Dreambooth/LoRA training, which are gamechangers - much faster than TI and they require fewer starting images.

24GB means super-high res images, very high speed training and batch production of images (several at once). It also means you've got more money than sense; can I borrow your GPU for a few days?


Please let us know how you get on.
 

fr34ky

Active Member
Oct 29, 2017
812
2,167
Call the zoo, their cougars escaped.

View attachment 2400043
Hey guys, answering my own question about using different tabs, etc. I've just discovered how to use 'styles', which lets you save unlimited presets of your prompts. I tried it before but couldn't figure it out. I hope I'm not too late bringing this info...

As seen in the picture:

1) press save and name the style to save it (duh)
2) choose the style you want to use
3) send the style to the prompt boxes

Important detail: when you reload your WebUI (at least in my case) the styles don't appear; you have to press the blue refresh button.
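For the curious: the styles you save this way land in a plain CSV file, so you can back them up or edit them by hand. A minimal sketch of reading it, assuming the stock layout (a styles.csv next to webui.py with columns name, prompt, negative_prompt):

```python
import csv

def load_styles(path="styles.csv"):
    # A1111 stores saved styles as rows of name/prompt/negative_prompt;
    # return them keyed by style name for easy lookup.
    with open(path, newline="", encoding="utf-8") as f:
        return {row["name"]: row for row in csv.DictReader(f)}
```

Keep a copy of that file before reinstalling and your presets survive.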

Edit: Forgot to attach pic XD styles.jpg
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
Checkpoints Compared, part 2. Three more checkpoints included. Prompt from BoobA-V1 at Civitai, edited by me; I also added the braless LoRA.
Prompt: underboob croptop, (braless), solo, from below, looking at viewer, brown hair, naval ring, mole, lips, brown eyes, long hair, (huge breasts), jewelry, earrings, indoors, arm up, looking down, ceiling, upper body

Checkpoints Compared2.png
This prompt was not as consistent as the first one, but I think it potentially shows off the differences better.
My conclusion is that there are many good checkpoints that are interchangeable, and it comes down to preference and
style choice.
There are also some really bad ones to be avoided, such as subredditV3, which doesn't generate anything good no matter what.

*Edit.
Apparently there is a use for subredditV3. According to one user it's good for inpainting realistic genitalia; in other words, generate with your favorite checkpoint and then use subredditV3 to inpaint more realistic genitalia.
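A side note on all the parentheses in prompts like the one above: in the A1111 webui each (…) nesting level multiplies a token's attention by 1.1, [ … ] divides by 1.1, and (word:1.3) sets the weight directly. A toy calculation of what the nesting actually does:

```python
def paren_weight(depth: int) -> float:
    # Each (...) layer in an A1111 prompt multiplies attention by 1.1,
    # so (braless) is ~1.1x and a triple like (((...))) is 1.1**3.
    return 1.1 ** depth
```

So stacking parentheses only nudges the weight; (word:1.5) is the explicit way to go higher.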
 
Last edited:

fr34ky

Active Member
Oct 29, 2017
812
2,167
Looks like higher resolutions are the key to doing good faces without 'restore faces'; problem is, pictures take a huge amount of time if you have an 8GB card like me... Higher resolutions look like next-level stuff for getting those crazy-quality images you see in some places.
 

fr34ky

Active Member
Oct 29, 2017
812
2,167
Today I learned about inpainting, non-restored faces, picture resolution and some other stuff, and tested it on an old 'problem' of the thread.

To improve the faces on two persons you just have to inpaint each face one at a time; I remember that was my answer at the time, but I never tried it. To make good inpaintings you can follow this little tutorial; note that no face restoration is used.



For more detail and control when inpainting, you can put the picture in Photoshop, create the mask there, and then put the picture back to inpaint. That's mostly needed in this picture because the faces look very small; I just did it with the normal inpaint because it's just an exercise.


This was made with the HASDX model, which probably wasn't the most suited for this picture, since I was just practicing I didn't care about that.

00622-2638084647-(a slutty british brunette woman surprised and smiling with sunlit face), Rea...png
 
Last edited:

Divine Cyberman

New Member
Feb 17, 2023
13
51
Today I learned about inpainting, non-restored faces, picture resolution and some other stuff, and tested it on an old 'problem' of the thread.

To improve the faces on two persons you just have to inpaint each face one at a time; I remember that was my answer at the time, but I never tried it. To make good inpaintings you can follow this little tutorial; note that no face restoration is used.



For more detail and control when inpainting, you can put the picture in Photoshop, create the mask there, and then put the picture back to inpaint. That's mostly needed in this picture because the faces look very small; I just did it with the normal inpaint because it's just an exercise.


This was made with the HASDX model, which probably wasn't the most suited for this picture, since I was just practicing I didn't care about that.

View attachment 2400624
I've been looking into this as well and just learned that inpainting after upscaling gives better results. Another thing I just found out is that a lot of models provide a separate "inpainting version" that you can use to get better results. URPM, the AbyssOrange mixes and Dreamshaper all have one.
 

HardcoreCuddler

Engaged Member
Aug 4, 2020
2,395
3,072
Yeah, I have a GTX 1060 3Gb, I should be able to use that option with no problem. I have tried removing the '--medvram' option and leaving only the '--xformers' and the 'opt split' option and it keeps giving me the same error when generating the images.
by the way, also make sure you're running the AI on your 1060 if you have an integrated GPU as well.
The simplest way to make sure of that is to go into the GeForce Control Panel, add the entire folder under Program Settings, then set it to use the 1060.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
Checkpoints Compared, Part 3. Prompt by DaringDivas, from the DucHaitenAIart comment section on Civitai.
Prompt:

Hasselblad award winning extreme close-up photograph of a beautiful coy female skinny blonde fitness model wearing an elegant bias-cut silk chiffon dress sitting, (((worlds largest GGGG breasts))), plunging cleavage, huge necklace, oiled skin, athletic abs, muscular arms, wide shoulders, beautiful face, high cheek-bones, small chin, heavy make-up, smoky eyes, detailed eyes, low body-fat, cinematic lighting, setting is a moody cocktail bar.

I used the Heun sampling method instead of the LMS one the OP used.


Checkpoints Compared3b.png
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
Pogo123 PM'ed me this:

"i saw you are quite active here and you have quite a big of knowledge regarding ai art.

i might be to silly to understand all the progress but i would really like to and i hope maybe you could help me a bit...

alway when i try to write prompts i get errors. when i use prompts of guys who share theirs i get errors too - could you maybe tell me why?

also so many use stable diff but when i go stable diff free and write positive and negative prompts i get just errors...

also i saw on civitai that ppl use more programs? do i need other programs to download to use stable diff and other ai programs effectively?

i am so confused and frustrated... maybe its cause i never really was an IT guy maybe its that or i am really too silly.

would be really nice if you could help me out - at least a bit.

greetings "

Let's help him.

My reply:
"No, you are not silly. If it weren't for all the good tutorial videos on YouTube I would be completely lost. If you get errors, it might be a punctuation issue or an install issue - only guessing. When you get an error, the cmd window usually specifies what kind of error it is, and your solution depends on that. About more "programs":
it's UIs, plus extensions and additions such as LoRAs. Don't worry about those things until you've figured out the basics. The two YT channels I watch most are and . Sebastian's Ultimate Guide is a good place to start.



Maybe you installed the latest Python and it is causing problems for you; I've heard of others having this issue. I simply followed Sebastian's guide and had no issues with the install.
Having a decent computer is also something to consider. Mine is not a monster rig but it's very capable: an overclocked GTX 1070 and an i7 CPU from 2017. VRAM is the big thing for AI image generation. Watch the guide video and see if you have followed all the steps in the installation. Then try to generate a simple image, and let's take it from there.
Don't hesitate to ask for help in the SD thread; this is what it's for. The more heads solving a problem, the better.
https://f95zone.to/threads/stable-diffusion-prompt-sharing-and-learning-thread.146036/page-7 ".
 

fr34ky

Active Member
Oct 29, 2017
812
2,167
Checkpoints Compared, Part 3. Prompt by DaringDivas, from the DucHaitenAIart comment section on Civitai.
Prompt:

Hasselblad award winning extreme close-up photograph of a beautiful coy female skinny blonde fitness model wearing an elegant bias-cut silk chiffon dress sitting, (((worlds largest GGGG breasts))), plunging cleavage, huge necklace, oiled skin, athletic abs, muscular arms, wide shoulders, beautiful face, high cheek-bones, small chin, heavy make-up, smoky eyes, detailed eyes, low body-fat, cinematic lighting, setting is a moody cocktail bar.

I used the Heun sampling method instead of the LMS one the OP used.


View attachment 2401846
I've been using Clarity and Deliberate; they are both great models and among my favorites:

PS: be careful with that subreddit model here
 

yomimmo

New Member
Nov 18, 2019
6
17
Hi Prompters. I wanted to tell you that yesterday, after several hours dedicated to finding and applying solutions and configurations to activate xformers, it seems everything is finally working, although truth be told, I don't notice a performance increase worthy of all the hard work it took to activate them on my rig. Maybe they are not really activated, but at least the cmd no longer gives me an error.
Well, what I actually wanted to share with you is a new discovery I've been testing, and it's great: ControlNet. It is a script that lets you insert an image of a character in the pose you want, copy the pose, and apply it magically to the image generated from your prompt. Here I leave this wonderful community a tutorial and the link to download the different models. The video explains the steps perfectly. Happy posing!

Models (extracted, so not the 5GB+ downloads like in the tutorial):



Tutorial:

 

Jimwalrus

Active Member
Sep 15, 2021
902
3,339
I'm supposed to be staying away from SD for a bit, get some work done so I don't get fired.
In my defence, I've got it running in the background on my personal PC while I wrestle with MS Visio...

Anyhoo, there's a new Extension available called 'Dynamic Thresholding CFG Scale Fix' which could well be a gamechanger as it greatly reduces the percentage of unwanted images.

Specifically it prevents that weird high-contrast 'burn-in' effect from very high CFGs, but still allows you to really 'force' a prompt, run high numbers of Steps etc. like a truly high CFG would.
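As I understand the extension's idea (a rough sketch of the concept, not its actual code), plain classifier-free guidance scales the prompt direction, and the "mimic" trick then rescales the high-CFG result so its magnitude stays near what a lower CFG would have produced, keeping the stronger prompt-following without the blown-out contrast:

```python
def cfg(uncond, cond, scale):
    # Plain classifier-free guidance: push the prediction away from
    # the unconditional one, scaled by the CFG value.
    return [u + scale * (c - u) for u, c in zip(uncond, cond)]

def mimic(uncond, cond, scale, mimic_scale):
    # Toy "dynamic thresholding": shrink the high-CFG output back to
    # the peak magnitude the mimicked (lower) scale would give.
    high = cfg(uncond, cond, scale)
    low = cfg(uncond, cond, mimic_scale)
    k = max(map(abs, low)) / max(map(abs, high)) if any(high) else 1.0
    return [x * k for x in high]
```

The real extension works per-step on latents with percentile clamping, so treat this purely as intuition for why "CFG 25 mimicking 7.5" doesn't burn in.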

All prompts etc. are in the PNGInfo.

Without Dynamic Thresholding, CFG 7.5:
00027-3652737871.png

Without Dynamic Thresholding, CFG 25:
00025-3652737871.png

With Dynamic Thresholding, CFG 25 "Mimicking 7.5": 00009-3652737871.png
 
Last edited:

fr34ky

Active Member
Oct 29, 2017
812
2,167
I've been looking into this as well and also just learned that inpainting after upscalling gives better results. Another thing that I just found out is that there are a lot of models that provide a separate "inpainting version" that you can use to get better results. URPM, the AbyssOrange mixes and Dreamshaper all have this feature.
I've just seen a video from Sebastian Kamph saying it's not always good to inpaint after the upscale. Funny, because I learned to do that from him in the first place; he doesn't say when it is and isn't good to do it. There's a lot of experimentation going on, it seems.