[Stable Diffusion] Prompt Sharing and Learning Thread

hkennereth

Member
Mar 3, 2019
237
775
Don't get me wrong, I love using loras too; they are just very inconsistent. I think it's all about the use case, though. If you are happy with the result you get, that's all that matters. One thing doesn't exclude the other: you can use both, or neither.
It's very difficult to get a perfect likeness with a lora, in my experience, no matter how good it is. With a faceswap you get closer. There is another option that hasn't been mentioned, and that is outpainting: regenerating only the face from an image and outpainting the rest. You can just mask the face and select "Inpaint not masked". Have you tried using a good lora and then applying a faceswap over it? That would use the bone structure from the lora and get the better likeness from the faceswap, essentially taking the best from both. You can use real images with OpenPose etc. for the different scenarios.
There are always new things to try. I think the fact that SD is not perfect is one of the things that keeps it interesting. If it were easy there wouldn't be any "sport".
Yeah, I tried a few different methods. My current one actually involves making an initial image without the lora, using IP-Adapter for a basic likeness with SDXL models to get a first pass with better colors and composition (but lower facial fidelity), and then using img2img with ControlNet to recreate that visual with SD 1.5 + lora, adding the likeness back to the image. I don't usually like applying faceswap methods afterwards because I feel they tend to make facial expressions look too bland for my taste, though I can't say I have really tried ReActor yet. At least in my personal experience, and for my needs, loras give me higher fidelity than any other method I have tried, while giving me full control over character pose and composition from the prompt, so I usually don't feel the need to add another step to increase that fidelity.

While I won't say that I never have issues getting a good resemblance when using loras, I feel that it's usually the result of a lora that wasn't trained on the best source images, or just the random nature of Stable Diffusion. I have some loras that, to me, are basically as good as it gets, so the inconsistency problems you describe are, again in my experience, less a limitation of the technology and more a case of bad implementation (i.e. poorly trained loras).

For example, here are some generations using my custom models compared to photos of the real women trained in the loras:

[Attached: five side-by-side pairs of lora generations and reference photos.]

So yeah, that's why I don't think loras are going anywhere. If you want to be able to create images from the prompt without being restricted to getting a likeness from a pre-existing photo, loras are a lot more flexible and lead to better quality than the alternatives... to me, at least. But my quality bar for likeness is perhaps higher than the average user's, so don't take my words as gospel.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
(quoting hkennereth's post above)
Very nice images. :) (y)
Yes, you might very well be right that it's poorly trained loras and/or the general nature of SD causing the inconsistencies; probably both. An alternative I already mentioned in a different post is FaceSwapLab. It has some interesting features. Like ReActor, you can create a face model from a batch of images to use instead of a single input image; but unlike ReActor, it also lets you use an input image together with the face model, and blend the face model, the input image, and the generated image with a slider. So you can fine-tune and balance the likeness more.
IP-Adapter does get you there partially, but it's not as accurate as a faceswap, at least in my attempts.
It was a long time ago that I trained a lora, and your conviction that it's superior inspires me to perhaps give it a go again. Good talk. (y)
 
  • Like
Reactions: hkennereth

JValkonian

Member
Nov 29, 2022
285
256
What method do you all use for aging people? I'm having a hard time setting ages; everyone comes out looking young. Sometimes I want middle-aged or older people, but whether I enter 30yo or 90yo they come out exactly the same.
I've even tried adding "slight wrinkles" and some other wording.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,571
3,768
(quoting JValkonian's question above)
What does your prompt look like? It would be best if you posted an image that contains your prompt. I'm pretty sure you'd have a reworked piece back in no time.
 

Thalies

New Member
Sep 24, 2017
13
50
(quoting JValkonian's question above)
AI models tend to generate younger-looking faces by default, making it tricky to create older characters.

To depict older ages effectively, incorporate detailed descriptions that focus on features associated with aging, rather than just stating an age number. Use terms that hint at life experiences and the passage of time.

Try a prompt like: "1 woman in her mid-60s, with long silver hair gracefully framing her wise, smiling face, eyes that carry the twinkle of youth yet reflect the depth of experience, with laugh lines that tell the story of a life fully lived."

[Attached: example generation of an older woman.]
 

JValkonian

Member
Nov 29, 2022
285
256
(quoting Thalies's reply above)
What a great shot! Yeah, I think I just have to be much more descriptive.
What if the lora you are using was trained on a younger model? Would aging work in the prompt, or would you have to age them in img2img?
 

JValkonian

Member
Nov 29, 2022
285
256
Kinda put the age token first and "max" it out. If this is on the younger end of what you want, keep adding age and add things like "grandmother".

We had a few posts about aging a few months back.
That's working much better, thank you! Yeah, I have been trying to go through every message; sorry if things have been repeated. I'll use the search more extensively next time. Thank you all!
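For reference, combining the two tips in this thread (descriptive aging features, plus the age token first with extra weight) in AUTOMATIC1111's attention syntax could look something like this; the exact tokens and the 1.4 weight are purely illustrative:

```
(90 year old woman:1.4), grandmother, wrinkled skin, age spots, thin gray hair, laugh lines, portrait photo
```

The (token:weight) syntax multiplies the attention given to that token, so a weight like 1.4 pushes the age much harder than plain text would.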
 
  • Like
Reactions: Sepheyer

me3

Member
Dec 31, 2016
316
708
This is a series of images using an increasing number of "steps" in an iterative upscale. The total upscale is 1.5x, and the number of steps/iterations runs from 3 to 8 (the number in each filename), using a decreasing denoise. As an example, the first iteration might use a denoise of 0.5 and an upscale to 1.2x of the original size, and the last a denoise of 0.1 at 1.5x of the original size, with the steps in between distributed evenly. (Not the actual values, as I can't be bothered doing the math for all of it; purely for illustration.) The workflow uses the same type of setup I posted a while ago regarding this method.
You can see how the detailing increases and/or changes elements, but it generally doesn't add massively overdone areas. Some oddities pop in, but they also get "fixed" in later images.

Might not be of too much interest, but there's boobs...which is usually a good thing...
it_3.jpg it_4.jpg it_5.jpg it_6.jpg it_7.jpg it_8.jpg

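The evenly distributed schedule described above can be computed exactly. Here is a minimal sketch, assuming a linear ramp from 1.2x upscale / 0.5 denoise on the first iteration to 1.5x / 0.1 on the last; the endpoint values are the illustrative ones from the post, not the actual workflow's:

```python
def upscale_schedule(n_steps, scale_start=1.2, scale_end=1.5,
                     denoise_start=0.5, denoise_end=0.1):
    """Linearly interpolate (upscale factor, denoise) pairs over n_steps
    iterations: high denoise early, low denoise late."""
    schedule = []
    for i in range(n_steps):
        # Fraction of the way through the schedule, 0.0 .. 1.0
        t = i / (n_steps - 1) if n_steps > 1 else 0.0
        scale = scale_start + (scale_end - scale_start) * t
        denoise = denoise_start + (denoise_end - denoise_start) * t
        schedule.append((round(scale, 3), round(denoise, 3)))
    return schedule

for step, (scale, denoise) in enumerate(upscale_schedule(6), start=1):
    print(f"iteration {step}: upscale to {scale}x, denoise {denoise}")
```

With n_steps between 3 and 8 this covers the range shown in the attached filenames; each pair would drive one img2img upscale pass.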
 

rayminator

Engaged Member
Respected User
Sep 26, 2018
3,127
3,187
How do I get --xformers to work with the new version of AUTOMATIC1111, v1.7.0? I have tried using set COMMANDLINE_ARGS=--allow-code --xformers in environment.bat, but nothing is working.

run.bat
@echo off

call environment.bat

cd %~dp0webui
call webui-user.bat

environment.bat
@echo off

set DIR=%~dp0system
set COMMANDLINE_ARGS=--allow-code --xformers

set PATH=%DIR%\git\bin;%DIR%\python;%DIR%\python\Scripts;%PATH%
set PY_LIBS=%DIR%\python\Scripts\Lib;%DIR%\python\Scripts\Lib\site-packages
set PY_PIP=%DIR%\python\Scripts
set SKIP_VENV=1
set PIP_INSTALLER_LOCATION=%DIR%\python\get-pip.py
set TRANSFORMERS_CACHE=%DIR%\transformers-cache
 

me3

Member
Dec 31, 2016
316
708
(quoting rayminator's post above)
I believe it no longer really respects that launch option; you now pick the cross-attention optimization in the webui itself (look under Settings → Optimizations), so check there.
Unless you're getting some kind of error, in which case we'd need to know what that error says.
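One more thing worth checking with the layout quoted above: in the stock AUTOMATIC1111 install, webui-user.bat (which run.bat calls last) typically contains its own set COMMANDLINE_ARGS= line, which would silently overwrite whatever environment.bat exported. A sketch of setting the flags directly in webui-user.bat instead, assuming the stock file layout:

```
@echo off
rem webui-user.bat -- runs last, so flags set here are the ones the webui sees
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--allow-code --xformers

call webui.bat
```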
 
  • Like
Reactions: rayminator

rayminator

Engaged Member
Respected User
Sep 26, 2018
3,127
3,187
i believe it's not really respecting the launch option any more and you just use the setting in the webui so look for that.
Unless you're getting some kind of error, in which case we'd need to know what that error is/says
Thanks for the help, but I couldn't find it. If anyone knows where the setting is, please help.