[Stable Diffusion] Prompt Sharing and Learning Thread

osanaiko

Engaged Member
Modder
Jul 4, 2017
2,422
4,278
A lot of applications with highly complex, customisable processing pipelines have moved to a visual node-graph representation - shader editors in 3D programs and Unreal Engine development are both examples. The benefit is that the complexity can be shown more clearly as a connected diagram.
The actual capabilities of Auto1111 and Comfy are basically the same.
 

Cakei

Newbie
Aug 30, 2017
79
83
So, I wanted to give Stable Diffusion a try because it made me want to create an RPG Maker story about a succubus in the style of a certain artist, Kainkout. It would only be used as a reference for the story, or for personal use (maybe a commission if I feel motivated enough).
The point is, there's this LoRA by the same artist:

I have been creating images and the results are good, but I wonder if they can be improved in terms of sampler, upscaler, and the general options used.
Also, is it a good idea to use checkpoint tags for better results?
And another thing: is it worth continuing to use WebUI, or would I get better results with ComfyUI? I'd also like to ask about the ControlNet implementation.

There is no hurry; this is only out of curiosity and a sudden urge, because I bought RPG Maker on Steam on sale.
 

osanaiko

Engaged Member
Modder
Jul 4, 2017
2,422
4,278
Regarding your generated images: post some samples, along with questions about what specifically concerns you.

> is it worth continuing to use WebUi or get better results with ComfyUI?
They are both just interfaces controlling the same underlying software. Comfy potentially makes it easier to grasp all the settings, but that matters much less than spending time practicing and understanding whichever tool you choose.

> Also ask about the controlnet implementation.
ControlNet is fantastic, and it's the only thing that makes any sort of consistent artwork feasible. Otherwise you end up with great images but too much variation in the target, and you never get a stable, repeatable character.
 

devilkkw

Member
Mar 17, 2021
320
1,075
Is this a good way to test a LoRA's effect in ComfyUI?
What I mean is, I can see the image is affected by the LoRA, but I want to know whether this is a good method or whether some other method works better.
Also, what I want is to see only the LoRA's effect, without any other embedding or LoRA: just testing one LoRA at a time.
Another question: I ran the same test in A1111 and started getting similar results, except with some LoRAs, which gave me a totally different style.
For example, I've trained several style LoRAs; one of them is a drawing style. In A1111 I get the result I want; in ComfyUI I don't: the image comes out realistic again and the LoRA seems unused. Can someone more skilled in ComfyUI explain why some LoRAs work the same way and some don't?
 

Delambo

Newbie
Jan 10, 2018
99
87
Using some LoRAs from civitai to see how well I can turn out consistent characters. The girls seem to work pretty well - the guys not so much, lol.

Here are the parameters from one of them


xyz_grid-0006-1010101010.jpg


It may just be that the male LoRAs are not as well trained? Thoughts and suggestions much appreciated.
 

me3

Member
Dec 31, 2016
316
708
> Is these a good way for testing lora effect in CUI?
Given that there are too many differences in the "path" to each picture, I'm leaning towards "probably not".
Some of it might be an oversight and easy to fix, but your seed isn't the same, and neither is the CFG. While these are likely small differences, they can be more than enough to throw off any comparison.
The second issue is that you're technically not running the same prompt. Any difference in the prompt can have very "altering" effects on the output, even differences that make no sense: even if the AI doesn't natively understand your trigger word without the LoRA, it will still have an impact. So to know for sure that your LoRA and/or trigger word is working (and how), you'd need to include the trigger word in both images as well. You probably want to run two tests, one with and one without the trigger word (for both images). That way you can see how the LoRA is applied, and whether the trigger word is actually needed or merely creates an "artistic difference" that might be useful in itself.
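For what it's worth, that comparison grid can be written down as a tiny script. This is pure bookkeeping, not actual generation; the function name and trigger word are made up for illustration:

```python
from itertools import product

def lora_ab_grid(prompt, trigger, seed=12345, cfg=7.0):
    """Build the 2x2 test matrix for a LoRA comparison:
    (LoRA on/off) x (trigger word present/absent), with every
    other setting held identical so only one variable changes."""
    runs = []
    for use_lora, use_trigger in product([True, False], repeat=2):
        full_prompt = f"{trigger}, {prompt}" if use_trigger else prompt
        runs.append({
            "prompt": full_prompt,
            "lora": use_lora,   # load the LoRA (strength unchanged)
            "seed": seed,       # identical seed for every run
            "cfg": cfg,         # identical CFG for every run
        })
    return runs

for run in lora_ab_grid("1girl, portrait, forest", trigger="kkw-style"):
    print(run["lora"], run["prompt"])
```

Four runs, one variable changed at a time; everything else pinned, so any visual difference is attributable to the LoRA or the trigger word.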

As for your issue of LoRAs not working: one thing I've noticed with some LoRA nodes is that if you make changes to them, like changing strengths, ComfyUI seems not to re-run them and sticks to its cached version. You can spot this if you track execution and see it starting at a later stage than where the LoRA is loaded. For me this was very bad when using LoRA stacks; hopefully it has been fixed recently, but just follow the execution and you should spot it.
Also, check the console output; it might say something about loading errors.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,561
3,703
Exactly. devilkkw, yea, bro, what me3 said.

In general, LoRAs are extremely intrusive and they irreversibly change the image.

Testing a LoRA ends up being subjective - you run it and visually try to assess whether it does what it is meant to do. Literally, your eyes are your only tool here.
 

devilkkw

Member
Mar 17, 2021
320
1,075
> On the basis that there's too many differences in what is done in the "path" to each picture, i'm leaning towards "probably not".

> The test for LORA becomes subjective - you run it, and visually try to assess if it does what it is meant to be doing.
Thank you, guys. I've been experimenting and discovered an extension with a node called Global Seed, useful for the tests I'm running. It changes the seed for every sampler and makes it the same for all of them (just set the samplers to fixed), which is good for image comparison.
I also installed the OpenPose editor and ControlNet. In A1111 these work but eat memory, so I removed them; in ComfyUI they work like a charm.
This is a total game changer for me. I need to experiment more, but I like how much the power of image generation has grown with ComfyUI.

And for anyone who likes experimenting, I found this for workflow examples. Useful for understanding different workflows ;)
 

devilkkw

Member
Mar 17, 2021
320
1,075
I love the "install missing custom nodes" function. ComfyUI is literally superior at managing updates and extensions.
Also, I've finally added wildcards that work like in A1111, and I'm really happy about it.
But... I need to know if there is a function to pull data from a node:
for example, I have a latent image node at 896w x 1152h; is it possible to reference those values elsewhere, for example in an upscale, and apply a multiplier?
Like (latent image w) * 2?
I know there is a latent upscale node that does this, but for the tests I'm running a function like this would be good; changing values manually is pretty boring.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,561
3,703
> for example i have a latent image node with 896w and 1152h, is posibble to recall the for example in upscale, and apply multiplier? Like (latent image w)*2?
There are math and utility nodes, such as "get size" or somesuch. That's how you grab the dimensions.
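To illustrate the arithmetic such a node chain would do, here's a minimal sketch in plain Python (not ComfyUI code; the snap-to-8 rounding is there because SD image sizes must be divisible by 8):

```python
def scaled_dims(width, height, multiplier=2.0, snap=8):
    """Scale dimensions by `multiplier` and round each down to a
    multiple of `snap` (SD latent sizes must be divisible by 8)."""
    def snap_to(v):
        return int(v * multiplier) // snap * snap
    return snap_to(width), snap_to(height)

print(scaled_dims(896, 1152))        # → (1792, 2304)
print(scaled_dims(896, 1152, 1.5))   # → (1344, 1728)
```

In ComfyUI you'd wire a "get size"-style node into math nodes doing exactly this, then feed the result into the upscale node instead of typing values by hand.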
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,799
Seph forgot to mention, it comes in different styles: Cartoon Style, Semi-Realistic, Hard Style, Soft fantasy, and the base model.
Pick one and download. The base model has trigger words: "Comics style", "Comics", "Cartoon style".
All images were generated with the same prompt and seed: Euler a, 30 steps, and CFG scale 6.

Cartoon Style: 00073-2684256637.png
Semi-Realistic: 00076-2684256637.png
Hard Style: 00078-2684256637.png
Soft fantasy: 00081-2684256637.png
Base model "Comics style": 00083-2684256637.png
Base model "Comics": 00088-2684256637.png
Base model "Cartoon style": 00091-2684256637.png
Base model "Comics style, Comics, Cartoon style": 00092-2684256637.png

And bonus.. (hard style)
00009-2684256637.png

And extra bonus..
00016-2684256637.png
 

hkennereth

Member
Mar 3, 2019
236
767
> A new way to fix hands with controlnet.
>
> ComfyUi Workflow:
>
> ControlNet Aux:
>
> Hand Inpaint Model:
>
> *I have not tried yet.
Took me a little while to get it going due to a bug that was preventing it from working on Windows, but... yeah, it works. Like most things in Stable Diffusion it doesn't work reliably 100% of the time, but when it does, it is pretty transformative.

1704629711031.png 1704629728919.png
(I also had my Face Detailer set after the first image, which is why the face also changes)

I found the workflow in the video pretty confusing, but this is pretty much all you need in your flow (the red part is just for previewing the masks and ControlNet images being generated; it's not required to make it work):
1704629862197.png

All in all, I approve, it's now part of my standard workflow. Thanks for the tip!
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,799
Another alternative for fixing hands. This was shared by the great Synalon, so all credit goes to him for finding this video tutorial.
It's a process very similar to inpainting, but it is done with the free image editor Krita and AI live painting. Since it's done in an editor, you have much more control over the masking process than with the rather primitive and simplistic masking in A1111 inpainting. Combining this with AI generation looks like it could be a very useful alternative to inpainting with SD and/or After Detailer.

Fixing hands with krita.png
Fixing hands with krita2.png


*I haven't tried this one yet either; it looks promising though.
 

modine2021

Member
May 20, 2021
415
1,388
Don't know what's going on. Networks quit showing up, just that spinning icon. I do have that prompts-all-in-one extension to make up for it, but I still want the normal tabs to work. I'm annoyed they got rid of the red "show extra networks" button and changed it to this. Nothing but issues since :cautious:
 

devilkkw

Member
Mar 17, 2021
320
1,075
Stop running A1111,
go to your A1111 folder and see if there is a file called user.css.
If yes, rename it to something like "old_user.css",
clear your browser cache,
restart A1111,
and check if the icon is back.
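The rename step, sketched in a shell (the folder here is a scratch stand-in for your real A1111 directory; adjust the path to your install):

```shell
# scratch folder standing in for the A1111 root (hypothetical path)
mkdir -p a1111_demo && cd a1111_demo
touch user.css                      # pretend this is the stale stylesheet

# move it out of the way so A1111 falls back to its default styling
[ -f user.css ] && mv user.css old_user.css

ls                                  # old_user.css
```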
 

modine2021

Member
May 20, 2021
415
1,388
> go to your a1111 folder, see if there are a file called user.css. if yes, rename it like "old_user.css"
There was no file by that name. Cleaned out the cache though; still the same issue. Can't figure it out... might go back to 1.6, because 1.7 has given nothing but headaches since updating.
 