[Stable Diffusion] Prompt Sharing and Learning Thread

devilkkw

Member
Mar 17, 2021
305
1,039
I guess a bit of both but technically it's a character, .

Sorry for the double post. I want to see some results from your LoRA test, when you're ready.


I want to fix my hands :cry:
Sometimes triggering an artist gives better results on hands, and if you have many artists in your prompt, you need to inspect it and prune it.
Also, on the negative prompt: start without any negatives to see whether the base prompt is good, then add negatives and keep generating as you put each parameter in or strip it out.
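That add-one/strip-one loop is easy to make systematic: generate with a fixed seed, then drop one negative term at a time and compare. A plain-Python sketch (the helper name is mine, not part of any SD tool):

```python
def negative_variants(base_negatives):
    """Yield (removed_term, remaining_negatives) pairs so each negative
    token can be tested in isolation against the same seed."""
    terms = [t.strip() for t in base_negatives.split(",") if t.strip()]
    for i, term in enumerate(terms):
        rest = terms[:i] + terms[i + 1:]
        yield term, ", ".join(rest)

# Generate once per variant (same seed!) and eyeball which term matters.
for removed, neg in negative_variants("bad hands, deformed, fused fingers"):
    print(f"without {removed!r}: negative_prompt = {neg!r}")
```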
 
Oct 19, 2019
42
117
Sorry for the double post. I want to see some results from your LoRA test, when you're ready.



Sometimes triggering an artist gives better results on hands, and if you have many artists in your prompt, you need to inspect it and prune it.
Also, on the negative prompt: start without any negatives to see whether the base prompt is good, then add negatives and keep generating as you put each parameter in or strip it out.
Thanks, I'm trying something like "bad hands, disfigured fingers, bad fingers, deformed, disfigured, fused fingers".
I don't know any artists, for example, but I will try it.

Maybe krenz cushart, fkey, shal. e

It seems ControlNet caused my weird fingers.
 

devilkkw

Member
Mar 17, 2021
305
1,039
You have to experiment and find what works best for the model you are using.

This is an example of a starting prompt without triggering any artist, along with the negative I use for every generation:
[spoiler: base prompt and negatives; result image akkw.png]

Whatever the result, this is just a starting point. Try adding something like an artist and see how it changes; make sure the artist is at the end of the prompt, so you don't lose the composition.

[spoiler: example images with artists applied]

The artist name is in the image name. You can mix artists, but remember to keep them at the end of the prompt.
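That ordering rule, composition tags first and artist names last, can be wrapped in a tiny helper (a sketch of my own; the function name and example tags are illustrative, not from any tool):

```python
def build_prompt(subject_tags, artist_tags):
    """Join tags so artist names always land at the end of the prompt,
    where they steer style without overriding the composition."""
    return ", ".join(list(subject_tags) + [f"by {a}" for a in artist_tags])

print(build_prompt(["1girl", "portrait", "soft lighting"],
                   ["krenz cushart", "wlop"]))
# 1girl, portrait, soft lighting, by krenz cushart, by wlop
```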
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
Thanks, I'm trying something like "bad hands, disfigured fingers, bad fingers, deformed, disfigured, fused fingers".
I don't know any artists, for example, but I will try it.

Maybe krenz cushart, fkey, shal. e

It seems ControlNet caused my weird fingers.
 

modine2021

Member
May 20, 2021
362
1,165
Don't know what happened. I was enjoying myself, but now nothing happens except this message, even after setting everything back to default values:


NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
Time taken: 15.92s
Torch active/reserved: 3242/3676 MiB, Sys VRAM: 5237/8192 MiB (63.93%)
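For what it's worth, the "not enough precision" wording is literal: half precision (float16) tops out at 65504, so large intermediate values overflow to inf and then collapse into NaN, which is what the NaN check trips on. A minimal NumPy illustration (my example, unrelated to the web UI's internals):

```python
import numpy as np

x = np.float16(70000.0)       # largest float16 is 65504, so this overflows
print(x)                      # inf
print(x - x)                  # inf - inf -> nan
print(np.float32(70000.0))    # fine in float32 (what --no-half falls back to)
```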
 

sharlotte

Member
Jan 10, 2019
268
1,436
modine2021, there's a long thread on GitHub about this;
people tried different things to get rid of the error. I'll let you read it and try the fixes - good luck!
 

modine2021

Member
May 20, 2021
362
1,165
modine2021, there's a long thread on GitHub about this;
people tried different things to get rid of the error. I'll let you read it and try the fixes - good luck!
Oh, I fixed it already after my other post. I fixed it by editing the webui-user.bat file:

set PYTHON="C:\Users\SomeBody\AppData\Local\Programs\Python\Python310\Python.exe"
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--xformers --no-half-vae --disable-nan-check
git pull
call webui.bat
Then save the file.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,528
3,598
WTF Is the Latent Space?

Latent space is the storyboard. So, it goes like this: you have a script that you want to make into a film.

You slice and dice the script into scenes and how you want them filmed. The document that captures this transition is a storyboard. Then, from that storyboard the film crew and the film machinery do the capture that eventually gets edited and produced into a film.

We just went: script -> [encode] -> storyboard -> [decode] -> film. IRL the encoder is the director, and the decoder is the film crew and everything in postproduction (plus beancounters, studio, etc.).

With Stable Diffusion, the encoder is your sampler (i.e. Euler A, Heun, etc.) and the decoder is your VAE (yes, I know a VAE is encode/decode, but fuck it).

Script (Prompt)
Storyboard (Latent)
Film (Clipspace/Pixelspace)
[attached images: script.png, SWSBs_OriginalTrilogy_p211A.jpg, 5096375693869056.jpg]

The reason why we even have latent space is because it is much cheaper and faster to manipulate. The efforts to manipulate the latent space are one-millionth of what the pixelspace costs to manipulate. Exactly the same reason why we have storyboards.
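To put rough numbers on that (my arithmetic, not from this post): the SD 1.x VAE works at 1/8 resolution with 4 channels, so a 512x512 RGB image shrinks from about 786k values to about 16k in latent space, roughly 48x fewer elements for the sampler to push around:

```python
# Pixel space: a 512 x 512 RGB image
pixels = 512 * 512 * 3                  # 786432 values

# Latent space: SD 1.x VAE encodes to 4 channels at 1/8 resolution
latent = 4 * (512 // 8) * (512 // 8)    # 4 * 64 * 64 = 16384 values

print(pixels, latent, pixels / latent)  # 786432 16384 48.0
```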

That's how I map "latent space". Would you have a better example?
 
Oct 19, 2019
42
117
Can you give us an update on how it went with fixing the hands? I'm curious.
So far, I'm more careful with the poses I select with ControlNet to avoid weird hands. I'm also making more variations or inpainting to get better hands. Finally, if it's still bad I just draw better hands lol

I did download the hand depth maps and I will try them. But it seems like a lot of work; at that point I could almost just draw them myself.
 