[Stable Diffusion] Prompt Sharing and Learning Thread

theMickey_

Engaged Member
Mar 19, 2020
2,091
2,627
And this is my question: how do I control the upscaler model? In A1111 I have an option to choose the resize factor, but I can't find any such option in Comfy.
There are Custom Nodes that can either rescale or resize your image, and you can also enter any factor by which you want to rescale/resize. Here's just one example:

[Attachment: 1704294318040.png]
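For reference, here is the same idea as a fragment of a ComfyUI API-format prompt, written as a Python dict. This is only a sketch: the node class names and inputs are from memory of a stock install, so verify them against yours, and the upscaler filename is just a placeholder.

# Sketch: upscale with an upscaler model, then rescale to the factor you actually want.
# Node class/input names assumed from a stock ComfyUI install -- double-check them.
upscale_fragment = {
    "10": {"class_type": "UpscaleModelLoader",
           "inputs": {"model_name": "4x-UltraSharp.pth"}},    # placeholder: any upscaler you have
    "11": {"class_type": "ImageUpscaleWithModel",
           "inputs": {"upscale_model": ["10", 0],
                      "image": ["8", 0]}},                     # "8" = your VAEDecode output
    "12": {"class_type": "ImageScaleBy",                       # e.g. bring a 4x result down to 2x total
           "inputs": {"image": ["11", 0],
                      "upscale_method": "lanczos",
                      "scale_by": 0.5}},
}

The upscaler model always multiplies by its own fixed factor (usually 4x), so the final resize factor is controlled by the scale node that follows it.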
 

devilkkw

Member
Mar 17, 2021
283
965
Many thanks, I added it. I'm impressed by the number of extensions CUI has o_O!
Any chance you can post the file with the actual workflow? It is much simpler for me to pop it into my CUI, make changes and post it back.
Yes, I just changed the upscaler node to the one suggested by theMickey_

I zipped it because JSON uploads are not allowed.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,523
3,589
Two items, maybe you know them already:

1) To export the workflow as a screenshot: right-click on the workspace > "Workflow Image" > "Export" > "PNG".

2) There is an extension that manages all the nodes for you, letting you install any nodes you are missing with just a few clicks. You might already have it; if not, you absolutely have to have it:
 
  • Red Heart
Reactions: devilkkw

Sepheyer

Well-Known Member
Dec 21, 2020
1,523
3,589
Ok guys, I'm really happy you made me move to Comfy. After some experimenting:
[image attached in spoiler]
The full-size image was too big to post here, so I reduced it to 50%.
And this is my question: how do I control the upscaler model? In A1111 I have an option to choose the resize factor, but I can't find any such option in Comfy.

Also, this is how it's made. I hope the node setup is correct; I'm just experimenting.
[workflow screenshot in spoiler]
So, another way to upscale is via a latent upscale rather than a pixel upscaler.

Namely, you run a sampler off an empty latent, upscale the latent that sampler produces, and then run that latent through a second sampler with the same prompt and model. I attached the workflow - you might know this already.

Why would you do this rather than a pixel upscale? The main reason is that the additional detail gets filled in by the actual model. Another reason is that some LORAs can burn the first output rather badly and you want to re-refine the latent using the original model. So you run the first sampler with the LORA and then re-run the same latent at a low denoise using the pure checkpoint. Or vice versa - you run the first sampler with the pure checkpoint, and only once it's upscaled do you throw in the LORA for refinement.

I attached your workflow with two additional samplers added. Naturally, you want preview nodes thrown in to see what each sampler does - that way you can safely cut down the number of steps. There are two latent upscalers - the regular one and NNLatentUpscale. Each has its own quirks. 80% of the time you want NNLatent, but sometimes the original latent upscaler is the cat's meow. There are three samplers: the first one uses the original checkpoint, the next two use the LORA-patched models.

[Attachment: workflow (4).png]
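In node terms the chain is roughly the following. This is a sketch of the API/JSON format written as a Python dict, with class and input names as I remember them from a stock install (NNLatentUpscale is a third-party node, so the built-in LatentUpscaleBy stands in for it here); treat it as an outline rather than a drop-in file.

# Two-pass latent upscale: full-denoise first pass, upscale the latent,
# then a low-denoise second pass so the model refines rather than repaints.
latent_upscale_fragment = {
    "3": {"class_type": "KSampler",              # first pass from the empty latent ("5")
          "inputs": {"model": ["4", 0], "positive": ["6", 0], "negative": ["7", 0],
                     "latent_image": ["5", 0],
                     "seed": 1234, "steps": 25, "cfg": 7.0,
                     "sampler_name": "dpmpp_2m", "scheduler": "karras", "denoise": 1.0}},
    "13": {"class_type": "LatentUpscaleBy",      # or the NNLatentUpscale custom node
           "inputs": {"samples": ["3", 0], "upscale_method": "bislerp", "scale_by": 1.5}},
    "14": {"class_type": "KSampler",             # second pass: low denoise = refine only
           "inputs": {"model": ["4", 0], "positive": ["6", 0], "negative": ["7", 0],
                      "latent_image": ["13", 0],
                      "seed": 1234, "steps": 25, "cfg": 7.0,
                      "sampler_name": "dpmpp_2m", "scheduler": "karras", "denoise": 0.5}},
}

To try the LORA variations described above, just point the "model" input of one of the passes at a LoraLoader output instead of the bare checkpoint.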
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
A new way to fix hands with controlnet.



ComfyUi Workflow:

ControlNet Aux:

Hand Inpaint Model:


*I have not tried it yet.
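For orientation, wiring a ControlNet into a ComfyUI prompt generally has this shape (a Python-dict sketch of the API format; the model name below is a placeholder, not the hand-repair model from the links above, and the preprocessor from ControlNet Aux would feed the "image" input):

# Generic ControlNet wiring sketch -- placeholder names, not the linked hand-fix resources.
controlnet_fragment = {
    "20": {"class_type": "ControlNetLoader",
           "inputs": {"control_net_name": "your_hand_controlnet.safetensors"}},  # placeholder
    "21": {"class_type": "ControlNetApply",
           "inputs": {"conditioning": ["6", 0],   # "6" = positive CLIPTextEncode
                      "control_net": ["20", 0],
                      "image": ["22", 0],         # "22" = preprocessor output, e.g. a hand depth map
                      "strength": 0.8}},
}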
 

devilkkw

Member
Mar 17, 2021
283
965
Yes, I was experimenting and just running some tests. I added a latent upscaler and a second pass; it seems good for reducing time, but from what I can see it's really difficult to obtain the same result as in A1111.
With the prompt I attached, the results in CUI are the same as yours, but A1111 stays closer to the prompt: the woman gets a really robotic suit in every render, while in CUI I get the robotic look maybe once every 50 tests :(
My inexperience, of course; experimenting is the key to finding what works better.
Thank you for the suggestion, I will test it soon.
Another really noob question: how do I load a workflow from a PNG?
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,523
3,589
Click on "Load" in the side menu, find the PNG file and open it.
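The Load button works because ComfyUI embeds the workflow JSON in the PNG's metadata. If you ever want to pull it out by hand, something like this does it (a small sketch using Pillow; assumes the PNG was saved by ComfyUI's own SaveImage node):

# Extract the workflow JSON that ComfyUI embeds in its output PNGs.
import json
from PIL import Image

img = Image.open("ComfyUI_00001_.png")       # any PNG saved by ComfyUI
workflow_text = img.info.get("workflow")     # the full editor graph
prompt_text = img.info.get("prompt")         # the API-format prompt
if workflow_text:
    with open("workflow.json", "w") as f:
        json.dump(json.loads(workflow_text), f, indent=2)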
 
  • Like
Reactions: devilkkw

devilkkw

Member
Mar 17, 2021
283
965
Thanks. I don't have the NN upscaler - is it an extension?
Also tried SVD... wow, impressive :eek:
It takes a lot of time, but for now I'm focusing on getting better images; then I'll try video.

Edit.
Sepheyer, I tested your workflow and wow. I changed the latent upscale and the original image size to my default (896x1152); this is the result, workflow included.
[result image with embedded workflow in spoiler]
I'll keep trying. I really love how Comfy works.
 
  • Red Heart
Reactions: Sepheyer

Delambo

Newbie
Jan 10, 2018
99
84
I'm just getting into starting my first VN, learning Ren'Py, and I'm pretty sure I'll use AI art since I'm shit at any art myself. It seems to me that A1111 is not nice to a new user, but that it probably has more power and flexibility, while CUI is easier to work with? Before I put any more time into A1111, is there a good reason to consider CUI for my assets?
 

devilkkw

Member
Mar 17, 2021
283
965
I started using CUI today. I've always used A1111 and the switch is not that fast, but after a few tries and some understanding of how it works, you start liking it as much as A1111.
Also check the guide on the first page; some posts here are really helpful too.
If you're just starting a new visual novel, I suggest trying CUI and evaluating which works better for your result.
Btw, getting skilled requires time and practice. I've used A1111 from the beginning, and there's an obvious difference in how you get to the same result in CUI - just keep trying.
If your game is not a commission with a deadline to respect, just take your time and try it.
 
  • Like
Reactions: Mr-Fox and Sepheyer

osanaiko

Engaged Member
Modder
Jul 4, 2017
2,180
3,622
A lot of applications with highly complex, customisable processing pipelines have moved to this kind of visual representation - 3D shader editors and Unreal Engine development are both examples. The benefit is that the complexity is shown better in a connected-diagram fashion.
The actual capabilities are basically the same between Auto1111 and Comfy.
 
  • Thinking Face
Reactions: Mr-Fox

Cakei

Newbie
Aug 30, 2017
78
81
So, I wanted to give Stable Diffusion a try because it made me want to create an RPG Maker story about a succubus in the style of a certain artist, Kainkout. It would only be used as a reference to create the story, or for personal use (maybe a commission if I feel motivated enough).
The point is, this Lora by the same artist:

I have been creating images and the results are good, but I'm wondering whether they can be improved in terms of the sampler, the upscaler and the general options used.
Also, is it a good idea to use checkpoint tags for better results?
And another thing: is it worth continuing to use the WebUI, or would I get better results with ComfyUI? I'd also like to ask about the ControlNet implementation.

There is no hurry; this is only out of curiosity and a surge of motivation because I bought RPG Maker on Steam on sale.
 
  • Like
Reactions: Mr-Fox

osanaiko

Engaged Member
Modder
Jul 4, 2017
2,180
3,622
Regarding your generated images: post some samples along with questions about what specifically you are concerned about.

> is it worth continuing to use WebUi or get better results with ComfyUI?
They are both just interfaces controlling the same underlying software. Comfy potentially makes it easier to grasp all the settings, but that's much less important than spending time practicing and understanding the tool you choose.

> Also ask about the controlnet implementation.
ControlNet is fantastic and the only thing that makes any sort of consistent artwork feasible. Otherwise you end up with great images but with too much variation on the target, and you never get a stable, repeatable character.
 
  • Like
Reactions: Mr-Fox

devilkkw

Member
Mar 17, 2021
283
965
Is this a good way of testing a LoRA's effect in CUI?
[workflow screenshot in spoiler]
What I mean is, I can see the image is affected by the LoRA, but I want to know whether this is a good way to test it, or whether some other method works better.
Also, what I want is to see only the LoRA's effect, without any other embedding or LoRA - just testing one LoRA at a time.
Another question: I did the same test in A1111 and I'm starting to get similar results, except for some LoRAs, which give me a totally different style.
For example, I've trained several style LoRAs; one of them is a drawing style. In A1111 I get the result I want, but in CUI I don't - the image comes out realistic again and it seems the LoRA isn't used at all. Does someone more skilled in CUI have an idea why some LoRAs work the same way and some don't?
 
  • Thinking Face
Reactions: Sepheyer

Delambo

Newbie
Jan 10, 2018
99
84
Using some LoRAs from Civitai to see how well I can turn out consistent characters. The girls seem to work pretty well - the guys, not so much, lol.

Here are the parameters from one of them

[generation parameters in spoiler]

[Attachment: xyz_grid-0006-1010101010.jpg]


It may just be that the male LoRAs are not as well trained? Thoughts and suggestions much appreciated.
 
  • Like
Reactions: Sepheyer and Mr-Fox

me3

Member
Dec 31, 2016
316
708
Regarding devilkkw's LoRA test: given how many differences there are in the "path" to each picture, I'm leaning towards "probably not".
Some things might be an oversight and easy to fix, but your seed isn't the same, nor is the CFG. While these are likely small differences, they can be more than enough to throw off any comparison.
The second issue is that you're technically not running the same prompt. Any difference in the prompt can have a very "altering" effect on the output, even if it makes no sense: even if the AI doesn't natively understand your trigger word without the LoRA, it will still have an impact. So to know for sure that (and how) your LoRA and/or trigger word is working, you'd need to include it in both images as well. You probably want to do two tests, one with and one without the trigger word (for both images). That way you can see how the LoRA is applied and whether the trigger word is actually needed, or whether it creates an "artistic difference" that might be useful in itself.

As for your issue of LoRAs not working: one thing I've noticed with some LoRA nodes is that if you change them, for example the strengths, ComfyUI seems not to "re-run" them but sticks to its cached version. You can spot this if you track the execution and see it starting at a later stage than where the LoRA is loaded. For me this was very bad when using LoRA stacks; hopefully it has been fixed recently, but just follow the execution and you should spot it.
Also, check the console output; it might say something about loading errors.
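To make that concrete, a minimal A/B test is just two samplers fed identical settings, where only one branch goes through the LoRA loader. A Python-dict sketch of the API format follows; node names are as in a stock install and the LoRA filename is a placeholder.

# A/B LoRA test sketch: identical seed/cfg/steps/prompt, only the model path differs.
# Each branch still needs its own VAEDecode + Preview/Save node to actually run.
lora_ab_fragment = {
    "10": {"class_type": "LoraLoader",
           "inputs": {"model": ["4", 0], "clip": ["4", 1],
                      "lora_name": "my_style_lora.safetensors",   # placeholder
                      "strength_model": 1.0, "strength_clip": 1.0}},
    # Branch A: through the LoRA (for a full test, feed the LoRA's CLIP output ["10", 1]
    # into this branch's text encoders as well).
    "30": {"class_type": "KSampler",
           "inputs": {"model": ["10", 0], "positive": ["6", 0], "negative": ["7", 0],
                      "latent_image": ["5", 0], "seed": 42, "steps": 25, "cfg": 7.0,
                      "sampler_name": "dpmpp_2m", "scheduler": "karras", "denoise": 1.0}},
    # Branch B: bare checkpoint, everything else identical.
    "31": {"class_type": "KSampler",
           "inputs": {"model": ["4", 0], "positive": ["6", 0], "negative": ["7", 0],
                      "latent_image": ["5", 0], "seed": 42, "steps": 25, "cfg": 7.0,
                      "sampler_name": "dpmpp_2m", "scheduler": "karras", "denoise": 1.0}},
}

Run it once with the trigger word in the shared prompt and once without, as suggested above, and compare the two outputs per run.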
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,523
3,589
Exactly. devilkkw, yeah bro, what me3 said.

In general, LORAs are extremely intrusive and they irrecoverably change images.

The test for a LORA ends up being subjective - you run it and visually try to assess whether it does what it is meant to do. Literally, your eyes are your only tool here.
 

devilkkw

Member
Mar 17, 2021
283
965
Thank you guys. I'm just experimenting and have discovered an extension called , which has a node called Global Seed. It's useful for the tests I'm running: it changes the seed for every sampler and keeps it the same across all samplers (just set the samplers to fixed), which is good for image comparison.
I also installed the OpenPose editor and ControlNet. In A1111 these work but eat memory, so I removed them; in CUI they work like a charm.
This is a total game changer for me. I need to experiment more, but I like how much the power of image generation has grown with CUI.

And for those who like experimenting, I found this for workflow examples. Useful for understanding different workflows ;)