A short while ago fr34ky talked about not using the Restore Faces function. I just learned about VAEs.
Why do we need a VAE?
Variational autoencoders (VAEs) are a deep learning technique for learning latent representations. They have also been used to draw images, achieve state-of-the-art results in semi-supervised learning, and interpolate between sentences.
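If you're curious what that looks like under the hood, here's a minimal toy VAE in PyTorch. This is my own sketch of the general idea, not SD's actual VAE: the encoder outputs a mean and log-variance, a latent is sampled with the reparameterization trick, and the decoder reconstructs the input from it.

```python
# Minimal toy VAE in PyTorch -- an illustration of the idea, not SD's VAE.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, in_dim=784, hidden=256, latent=16):
        super().__init__()
        self.enc = nn.Linear(in_dim, hidden)
        self.mu = nn.Linear(hidden, latent)      # mean of q(z|x)
        self.logvar = nn.Linear(hidden, latent)  # log-variance of q(z|x)
        self.dec1 = nn.Linear(latent, hidden)
        self.dec2 = nn.Linear(hidden, in_dim)

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        recon = torch.sigmoid(self.dec2(F.relu(self.dec1(z))))
        return recon, mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior
    bce = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld
```

Training it (a loop over images, backprop on vae_loss) is left out to keep the sketch short; the point is just the encode-sample-decode structure.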
SD 1.5 comes with a built-in VAE, but there is a newer, improved version.
What does it do?
It gives better, sharper details in areas like eyes and faces, and it's fast.
Download "vae-ft-mse-840000-ema-pruned.safetensors" and place it inside "Stable-Diffusion\stable-diffusion-webui\models\VAE".
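If you'd rather script the download, here's a sketch using huggingface_hub. The repo ID stabilityai/sd-vae-ft-mse-original is my assumption about where this file is published, so double-check it before relying on this; the target folder is the one from the step above.

```python
# Hedged sketch: fetch the VAE and drop it into the WebUI's VAE folder.
# Assumption: the file lives in the stabilityai/sd-vae-ft-mse-original
# repo on Hugging Face -- verify that before trusting this script.
import shutil
from pathlib import Path
from huggingface_hub import hf_hub_download

vae_dir = Path(r"Stable-Diffusion\stable-diffusion-webui\models\VAE")
vae_dir.mkdir(parents=True, exist_ok=True)

downloaded = hf_hub_download(
    repo_id="stabilityai/sd-vae-ft-mse-original",  # assumed repo
    filename="vae-ft-mse-840000-ema-pruned.safetensors",
)
shutil.copy(downloaded, vae_dir / "vae-ft-mse-840000-ema-pruned.safetensors")
print("VAE installed to", vae_dir)
```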
Go to Settings, Stable Diffusion.
Press Reload SD VAE.
And choose "vae-ft-mse-840000-ema-pruned.safetensors".
Also "Ignore Selected VAE for checkpoints that has their own" seems like a good idea, it was checked by default but if it isn't make sure it is.
Press "Apply settings".
So does this mean we don't need to use "Face Restoration" anymore? Let's try it and see. Try with and without, and also try the different types with different settings.
In settings, "Face Restoration" you can choose either to use "codeformer" or "GFPGAN" and how much it is being used.
They give a slightly different result, so try both.
" CodeFormer weight parameter; 0 = maximum effect; 1 = minimum effect " so to decrease the effect increase the number.
Then there is postprocessing as a separate option, where CodeFormer is also used. Yeah, I know, it's a bit confusing since it's the same name.
In Settings, "Postprocessing", you can "Enable postprocessing operations in txt2img and img2img tabs".
Here again you can choose "CodeFormer" or "GFPGAN", or both.
It will show up in the UI for txt2img and img2img.
I'm using CodeFormer at the moment with Visibility: 1 and Weight: 0.4 while testing.
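To give a feel for what the two sliders mean, here's a conceptual sketch. This is my own simplification, not the WebUI's actual code: Weight goes into CodeFormer itself (0 = strongest restoration), while Visibility alpha-blends the restored image back over the original, so 1 shows only the restored result.

```python
# Conceptual sketch of the two sliders -- a simplification, not A1111's code.
from PIL import Image

def apply_restoration(original_path, restored_path, visibility=1.0):
    """Blend a face-restored image over the original.

    'Weight' is consumed inside CodeFormer itself (0 = max effect,
    1 = min effect); 'Visibility' is a plain alpha blend applied after.
    Assumes both images are the same size.
    """
    original = Image.open(original_path).convert("RGB")
    restored = Image.open(restored_path).convert("RGB")
    # visibility = 0 -> original untouched; visibility = 1 -> fully restored
    return Image.blend(original, restored, visibility)

result = apply_restoration("original.png", "restored.png", visibility=0.5)
result.save("blended.png")
```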
I'm not an expert, only learning as I move along.