[Stable Diffusion] Prompt Sharing and Learning Thread

A short while ago fr34ky talked about not using the restore-face function. I just learned about VAEs: variational autoencoders.

Why do we need VAE?

Variational autoencoders (VAEs) are a deep learning technique for learning latent representations. They have also been used to draw images, to achieve state-of-the-art results in semi-supervised learning, and to interpolate between sentences.

SD 1.5 comes with a VAE, but there is a new, improved version.

What does it do?

It gives better and sharper details, such as eyes and faces, faster.
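For context, the VAE is the part of Stable Diffusion that decodes the latents back into pixels, so swapping in a better-trained VAE sharpens the decoded detail. A minimal sketch of that idea using the Hugging Face diffusers library (the model ids are the public SD 1.5 and ft-MSE ones; treat this as an illustration, not the webui's actual code):

```python
import torch
from diffusers import StableDiffusionPipeline, AutoencoderKL

# Load the improved ft-MSE VAE on its own.
vae = AutoencoderKL.from_pretrained(
    "stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16
)

# Hand it to the SD 1.5 pipeline in place of the baked-in VAE.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("portrait photo, detailed eyes").images[0]
image.save("portrait.png")
```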

Get it here:

Press download, then place the file inside "Stable-Diffusion\stable-diffusion-webui\models\VAE".
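If you'd rather script the download than click through the browser, a hedged sketch using huggingface_hub (the repo id and filename are the public StabilityAI ones; WEBUI_DIR is a placeholder for your own install path):

```python
import shutil
from pathlib import Path
from huggingface_hub import hf_hub_download

WEBUI_DIR = Path("Stable-Diffusion/stable-diffusion-webui")  # adjust to your install
vae_dir = WEBUI_DIR / "models" / "VAE"
vae_dir.mkdir(parents=True, exist_ok=True)

# Fetch the VAE from the Hugging Face hub and copy it into the webui folder.
downloaded = hf_hub_download(
    repo_id="stabilityai/sd-vae-ft-mse-original",
    filename="vae-ft-mse-840000-ema-pruned.safetensors",
)
shutil.copy(downloaded, vae_dir / "vae-ft-mse-840000-ema-pruned.safetensors")
```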

Go to settings, Stable Diffusion.
Press Reload SD VAE.
And choose "vae-ft-mse-840000-ema-pruned.safetensors".
Also "Ignore Selected VAE for checkpoints that has their own" seems like a good idea, it was checked by default but if it isn't make sure it is.
Press "Apply settings".

So does this mean we don't need to use "Face Restoration" anymore? Let's try it and see. Try with and without, and also try the different types with different settings.
In settings, under "Face Restoration", you can choose either "CodeFormer" or "GFPGAN" and how strongly it is applied.
They give slightly different results, so try both.
"CodeFormer weight parameter; 0 = maximum effect; 1 = minimum effect", so to decrease the effect, increase the number.
Then there is postprocessing as an option, where CodeFormer is also used. Yes, I know, a bit confusing since it's the same name.
In settings, under "Postprocessing", you can "Enable postprocessing operations in txt2img and img2img tabs".
Here again you can choose between "CodeFormer" and "GFPGAN", or both.
It will show up in the UI for txt2img and img2img.
I'm using CodeFormer at the moment with Visibility: 1 and Weight: 0.4 while testing.
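As I understand it (an assumption about the webui's internals, not a quote of its code), Visibility is just an alpha-blend of the restored face over the original image, while the CodeFormer Weight is the separate fidelity knob described above:

```python
import numpy as np

def blend_restored(original: np.ndarray, restored: np.ndarray,
                   visibility: float) -> np.ndarray:
    """Alpha-blend the face-restored image over the original.

    visibility=1.0 -> fully restored result, visibility=0.0 -> untouched.
    Both arrays are float RGB images of the same shape.
    """
    return (1.0 - visibility) * original + visibility * restored
```

So Visibility: 1 with Weight: 0.4 means the restored face fully replaces the original, with CodeFormer itself running at fairly strong effect.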
I'm not an expert, only learning as I move along.
Fortunately, VAEs are already baked into a lot of models, such as the latest release of URPM. Also, if you set a VAE manually in the dropdown settings and the model already has one pre-baked, it should override the one baked into the model, which might or might not be desirable (depending on what you want). So I usually leave it on automatic unless I know the model does not have a pre-baked VAE, or I want to try a different VAE to get different results. VAEs do seem to help with faces, especially if they look melted or have strange magenta colors, but it's not a huge difference.
 

Mr-Fox

Fortunately, VAEs are already baked into a lot of models, such as the latest release of URPM. Also, if you set a VAE manually in the dropdown settings and the model already has one pre-baked, it should override the one baked into the model, which might or might not be desirable (depending on what you want). So I usually leave it on automatic unless I know the model does not have a pre-baked VAE, or I want to try a different VAE to get different results. VAEs do seem to help with faces, especially if they look melted or have strange magenta colors, but it's not a huge difference.
Read my post again. In the settings, check "Ignore selected VAE for stable diffusion checkpoints that have their own .vae.pt next to them". ;) The ones that have a VAE usually have it in the name, and if one doesn't, it is very easy to just change the name and add it yourself.
 

Mr-Fox

FML. Her left eye makes me want to set my hair on fire.

Would anyone have a fix for this particular image?

Naturally there are good ones in the mix too.
But I wonder about fixing something that you are absolutely in love with.

Gorgeous. :love:
Have you installed the new VAE? Did you also try the different options for Face Restoration, with different settings?
Have you tried using postprocessing? I posted about inpainting to fix eyes a while ago.

Then photoshop is always an option if all else fails.
 
Read my post again. In the settings, check "Ignore selected VAE for stable diffusion checkpoints that have their own .vae.pt next to them". ;) The ones that have a VAE usually have it in the name, and if one doesn't, it is very easy to just change the name and add it yourself.
Maybe it was not clear what I meant. The checkbox in your post refers to a setting for the old method of using VAEs, which is to rename the checkpoint and VAE files (so they have the same name) and put them in the same folder, next to each other. Now you don't have to do that: simply put the VAE files in the VAE folder and choose the one you want from the dropdown in settings (so that checkbox is a bit obsolete). What I was referring to is the fact that some newer/updated checkpoints now have a VAE already baked into the model itself, so there is often no need to change any setting or choose a VAE in the dropdown.
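To make the two layouts concrete, a sketch of the file paths involved (the checkpoint filename is just an example):

```python
from pathlib import Path

models = Path("stable-diffusion-webui/models")

# Old method: the VAE is renamed to match the checkpoint and sits next to it;
# this is what the "has their own .vae.pt next to them" checkbox refers to.
old_ckpt = models / "Stable-diffusion" / "uberRealisticPornMerge.safetensors"
old_vae = models / "Stable-diffusion" / "uberRealisticPornMerge.vae.pt"

# New method: VAEs live in their own folder and you pick one from the dropdown.
new_vae = models / "VAE" / "vae-ft-mse-840000-ema-pruned.safetensors"
```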
 

deltami

1. I've noticed ControlNet doesn't work well with some checkpoints merged far away from SD 1.5, like URPM. Openpose is the worst; the face is completely melted. HED is better, but you still need to manually adjust the HED output to clear up redundant details. Is there a better ControlNet model to use instead? I would prefer a better openpose so I don't need to bring Blender into the process.

2. I know instruct-pix2pix is baked into the webui. I think it has its own imprint model, right? If we're using something URPM-ish, would we be better off transfer-merging a new imprint model for the instruct?
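On point 1, for anyone experimenting outside the webui, this is roughly how you'd pair the openpose ControlNet with a different base checkpoint in diffusers; a sketch assuming the public lllyasviel model id, with the pose image path as a placeholder:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# The openpose ControlNet was trained against SD 1.5.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)

# Swap the base model id for a merged checkpoint to test how far from
# SD 1.5 it can drift before faces start to melt.
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

pose = load_image("pose.png")  # an openpose skeleton image (placeholder)
image = pipe("photo of a woman, detailed face", image=pose).images[0]
image.save("out.png")
```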
 

Mr-Fox

Maybe it was not clear what I meant. The checkbox in your post refers to a setting for the old method of using VAEs, which is to rename the checkpoint and VAE files (so they have the same name) and put them in the same folder, next to each other. Now you don't have to do that: simply put the VAE files in the VAE folder and choose the one you want from the dropdown in settings (so that checkbox is a bit obsolete). What I was referring to is the fact that some newer/updated checkpoints now have a VAE already baked into the model itself, so there is often no need to change any setting or choose a VAE in the dropdown.
I see. Clear as soup. :LOL: I confess I had to read this several times in the morning before getting it. :rolleyes::ROFLMAO: I was not aware that the checkbox description referred to a checkpoint with a VAE file beside it. I had seen checkpoints with VAE in the name, i.e. beside the checkpoint, and got confused; I thought it meant that if the checkpoint has VAE in the name, SD would not use the internal VAE.
Sometimes I wonder if my brain is made of concrete. Absolutely dense. :oops::LOL:
 

Mr-Fox

A perfect waifu for sure, absolutely gorgeous. SD often has difficulty with side-viewing eyes, I have noticed. It nailed this one, though.
 

Sepheyer

A perfect waifu for sure, absolutely gorgeous. SD often has difficulty with side-viewing eyes, I have noticed. It nailed this one, though.
I can only imagine how out of this world things will become once you have proper scene-composing capability with all the drill-down aspects. I could really wave my perverted flag on and on then.

Seriously tho, this comment on DeviantArt got me thinking: "I love the panty-line tattoo..."

This is for the image below. I didn't even notice there was a tattoo; I thought a leaf got fused. And then I paused to think how sweet it would be for us to drill down to such minor details and change textures, patches, fragments, etc. Literally, my Honey Select 2 freak got loose. Probably LoRAs or their successors are what's going to get us there. Keeping fingers crossed.

(attached image)
 

fr34ky

Article about fixing extra or deformed limbs, etc., with inpainting.



Example 1: before/after attachments.

Example 2: before/after attachments.
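For anyone who prefers code over the webui's inpaint tab, the same idea as a hedged diffusers sketch (file names are placeholders; the inpainting model id is the public runwayml one):

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = load_image("render.png")  # the image with the extra limb
mask = load_image("mask.png")     # white where SD is allowed to repaint

fixed = pipe(
    "photo of a woman, two arms",  # describe what SHOULD be there
    image=image,
    mask_image=mask,
).images[0]
fixed.save("fixed.png")
```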
Looks very promising. Sometimes the best stuff gets ruined by extra limbs or fingers, so this is on my to-do list now. I hope there is a video about it. I mostly watch Sebastian Kamph's workflow videos; he's my favorite of all the AI youtubers.
 

Mr-Fox

Article about the workflow to create great-looking art; a rough code sketch of the steps follows below.

The steps in this workflow are:

  1. Build a base prompt.
  2. Choose a model.
  3. Refine the prompt and generate an image with good composition.
  4. Fix defects with inpainting.
  5. Upscale the image.
  6. Final adjustment with photo-editing software.
Example image (attachment).
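A rough code-side sketch of steps 1 through 5 (step 6 happens in a photo editor); the model ids are public ones, the prompts and mask path are placeholders, and this illustrates the workflow rather than reproducing the article's own code:

```python
import torch
from diffusers import (
    StableDiffusionPipeline,
    StableDiffusionInpaintPipeline,
    StableDiffusionUpscalePipeline,
)
from diffusers.utils import load_image

device = "cuda"

# Steps 1-3: base prompt, chosen model, refined prompt -> good composition.
txt2img = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to(device)
image = txt2img("masterpiece portrait, soft light, detailed eyes").images[0]

# Step 4: fix defects with inpainting (mask painted by hand).
inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to(device)
image = inpaint(
    "detailed eyes", image=image, mask_image=load_image("defect_mask.png")
).images[0]

# Step 5: upscale the result 4x.
upscaler = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to(device)
image = upscaler(prompt="portrait", image=image).images[0]
image.save("final.png")
```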
 

Mr-Fox

Looks very promising. Sometimes the best stuff gets ruined by extra limbs or fingers, so this is on my to-do list now. I hope there is a video about it. I mostly watch Sebastian Kamph's workflow videos; he's my favorite of all the AI youtubers.


 

fr34ky



I just watched the video about hand inpainting. She's generating like 100 iterations of completely different results to get the hands right; the results went from 1 finger to 5 fingers to parts of a banjo... I'm not sure that is the best method, it looks more like betting and praying :ROFLMAO:. Most guys who know what they are doing have a very precise method that gets the result in a couple of generations. That video is 2 months old, and with how fast the techniques are moving, it has probably aged completely.
 

Sepheyer

I just watched the video about hand inpainting. She's generating like 100 iterations of completely different results to get the hands right; the results went from 1 finger to 5 fingers to parts of a banjo... I'm not sure that is the best method, it looks more like betting and praying :ROFLMAO:. Most guys who know what they are doing have a very precise method that gets the result in a couple of generations. That video is 2 months old, and with how fast the techniques are moving, it has probably aged completely.
I am somewhat hanging my hopes on ControlNet: that within a few months it will replace a bunch of the current approaches/subtechs.
 

fr34ky

I am somewhat hanging my hopes on ControlNet: that within a few months it will replace a bunch of the current approaches/subtechs.
Totally, I think some ControlNet technique will help fix most of these deformities. I'm doing some tests mixing ControlNet with inpainting, but without good results at the moment.
 
I just watched the video about hand inpainting. She's generating like 100 iterations of completely different results to get the hands right; the results went from 1 finger to 5 fingers to parts of a banjo... I'm not sure that is the best method, it looks more like betting and praying :ROFLMAO:. Most guys who know what they are doing have a very precise method that gets the result in a couple of generations. That video is 2 months old, and with how fast the techniques are moving, it has probably aged completely.
That's the so-called gacha method; I still use it when I'm feeling lazy, lol. One convoluted workflow that I find works almost 100% of the time is to use DAZ3D or Blender to render a hand with roughly the same skin tone, angle, pose, and size on a transparent background (it doesn't need to be perfect). Or just draw a sketch of the hand, if you can. Then crop it, superimpose it over the gen you want to fix using image-editing software, and fuse the two using inpainting and img2img with a low denoise setting.
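The paste-then-fuse trick in (hedged) code form, using PIL plus a diffusers img2img pass at low denoising strength; the file names and paste position are placeholders:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

gen = Image.open("gen.png").convert("RGB")
hand = Image.open("hand_render.png").convert("RGBA")  # transparent background

# Superimpose the rendered hand over the bad one (alpha channel as mask).
gen.paste(hand, (380, 620), mask=hand)

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Low strength keeps the composite's structure and only re-renders the seam
# and fine detail, fusing the pasted hand into the image.
fixed = pipe("photo, detailed hand", image=gen, strength=0.3).images[0]
fixed.save("fixed.png")
```

The low denoising strength is the key design choice: at roughly 0.2 to 0.4, SD mostly preserves the composite and just harmonizes lighting and edges, while higher values start reinventing the hand all over again.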