[Stable Diffusion] Prompt Sharing and Learning Thread

miaouxtoo

Newbie
Mar 11, 2023
46
132
I believe this image proves my point that Hires fix is superior to any simple upscaling.
It's the same seed as before with the same settings. I have made some small additions to the prompt though.
Other than this it's only the use of Hires fix that is making the difference. With some further work this could be a really nice one.
I've been reading that with SD in the current state, there's no real alternative to the quality of using the original prompt + model etc + hires fix.

Using img2img in its current form isn't going to be as good because it's trying ... but it doesn't create in the same way that the original prompt does.

It probably doesn't matter for some types of creations (abstract, blocks of color, simple anime, etc.), but for others the difference shows quite clearly.
 
  • Like
Reactions: Mr-Fox and Sepheyer

Sepheyer

Well-Known Member
Dec 21, 2020
1,523
3,589
Mr-Fox there is this template for ComfyUI Hi-Res workflow:

A guy in the comments says this upscale actually fixes the faces, so there's that. Proves me wrong where I was stupidly adamant one needs a dedicated face-restore algo.
 
  • Red Heart
Reactions: Mr-Fox

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
Where else can we change VAE type, except here?


Can it be done in the prompt field or at some other place?
Not that I know of. Why? What are you trying to achieve? There might be an extension or something similar.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
Mr-Fox there is this template for ComfyUI Hi-Res workflow:

A guy in the comments says this upscale actually fixes the faces, so there's that. Proves me wrong where I was stupidly adamant one needs a dedicated face-restore algo.
I see that ComfyUI is playing catch-up, but it's coming along fast. Perhaps it will be the "other" alternative to webui once it has the same functions.
Face restoration is useful sometimes, though I find that when using Hires fix it's better to skip it and only use post processing. I use GFPGAN most of the time, but CodeFormer has the option of adjusting its weight. For some reason the slider works the opposite way, because of course it does... meaning a higher value translates to less weight. Some people just have to make things more awkward if they can help it.
 
Last edited:
  • Haha
Reactions: Sepheyer

Nano999

Member
Jun 4, 2022
152
68
Has anyone faced this error:

modules_alpha[lora_name] = float(value.detach().cpu().numpy())
TypeError: Got unsupported ScalarType BFloat16


It shows up right before image generation. No effect on the generated result though.
I've seen some posts containing this error; just wanted to know if anyone here got it as well and what happened.
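For what it's worth, that traceback usually means a BFloat16 tensor is being handed straight to NumPy, which doesn't support that dtype. A common workaround (sketched here, not the actual patch in any particular extension; `value` just stands in for the LoRA alpha tensor) is to cast to float32 first:

```python
import torch

# The failing pattern: float(value.detach().cpu().numpy()) on a bfloat16
# tensor raises "TypeError: Got unsupported ScalarType BFloat16".
value = torch.tensor(0.5, dtype=torch.bfloat16)

# Workaround: cast to float32 before the NumPy conversion.
alpha = float(value.detach().to(torch.float32).cpu().numpy())
```

Since the value is only being read out as a Python float, the cast loses nothing, which would also explain why the error has no effect on the generated images.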
 

fr34ky

Active Member
Oct 29, 2017
812
2,167
Any advice to make LORAs look more accurate and better?
From what I've heard, read, and practiced:

1) you have to pay some attention to every picture: make sure they are good quality, edit them if needed, and upscale them to make them clearer
2) choose different angles of the same character, with different backgrounds and different lighting
3) start with only LORAs of faces

I couldn't create full-body LORAs at the moment; I didn't try much either, so that's just my experience. But I have a couple of good ones with faces that work pretty well. You can get away with 15 or 20 images for face LORAs.
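The checklist above can be partly automated. Here's a hypothetical helper (the function name, folder layout, and the 15-image rule of thumb are assumptions from this post, not part of any training tool) that counts the usable images in a training folder before you start a face-LoRA run:

```python
from pathlib import Path

# Common raster formats accepted by most LoRA training scripts (assumption).
IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def check_dataset(folder: str, min_images: int = 15) -> dict:
    """Count candidate training images and flag whether the folder
    meets the ~15-image rule of thumb for face LoRAs."""
    paths = [p for p in Path(folder).iterdir() if p.suffix.lower() in IMAGE_EXTS]
    return {
        "count": len(paths),
        "enough_for_face_lora": len(paths) >= min_images,
        "files": sorted(p.name for p in paths),
    }
```

It only checks counts and extensions; quality, angle variety, and lighting still have to be checked by eye.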
 

devilkkw

Member
Mar 17, 2021
283
965
if you are in automatic1111 you have to edit webui-user.bat and put in (no quotes around the flag, or the launcher won't parse it):
Code:
set COMMANDLINE_ARGS=--xformers
 
  • Like
Reactions: Mr-Fox

Sepheyer

Well-Known Member
Dec 21, 2020
1,523
3,589
So, composing is a bitch. There's ControlNet; I am just not there yet. I was trying the old methods of clip-compose and latent-compose.

Either way, ComfyUI shines because it lets you step into the shoes of SD. One of the approaches is to save intermediate steps so you can observe the impact they have on the finished piece.

Below I generate the foreground, then the background, then combine them in a full-size image, and then flesh out that image for the final delivery.

ws.png
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
So, composing is a bitch. There's ControlNet; I am just not there yet. I was trying the old methods of clip-compose and latent-compose.

Either way, ComfyUI shines because it lets you step into the shoes of SD. One of the approaches is to save intermediate steps so you can observe the impact they have on the finished piece.

Below I generate the foreground, then the background, then combine them in a full-size image, and then flesh out that image for the final delivery.

View attachment 2477533
You can save intermediate steps in webui (auto1111) as well. In the settings you can choose when in the process you wish to save, and it seems you can save at several points. I have not tried these functions yet. There are many more functions like this. There is an extension that allows you to generate images between seed numbers, whatever that means; I will post about it after trying it. Another one that looks interesting lets you choose which part of the prompt gets more focus and can generate different results based on that.
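On "generating images between seed numbers": what such extensions typically do (an assumption on my part, not a description of any specific extension) is spherically interpolate between the initial noise latents of two seeds, then denoise each in-between latent as usual. A minimal sketch of that interpolation, with small flattened vectors standing in for real latents:

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray) -> np.ndarray:
    """Spherical interpolation between two noise vectors.
    t=0 returns v0, t=1 returns v1; in-between values stay on the
    great-circle arc, which keeps the noise statistics sensible."""
    v0n = v0 / np.linalg.norm(v0)
    v1n = v1 / np.linalg.norm(v1)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    theta = np.arccos(dot)
    if np.isclose(theta, 0.0):
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# Two "seeds" -> two independent noise latents (toy size for illustration).
latent_a = np.random.default_rng(1).standard_normal(16)
latent_b = np.random.default_rng(2).standard_normal(16)
midpoint = slerp(0.5, latent_a, latent_b)  # would be denoised like any latent
```

Stepping t from 0 to 1 in small increments and rendering each latent gives the smooth morph between the two seeds' images.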
 
Last edited: