[Stable Diffusion] Prompt Sharing and Learning Thread

Sharinel

Active Member
Dec 23, 2018
598
2,509
Hi guys! Total noob here when it comes to AI-generated images, but I'm trying to get started somehow. I've read the OP and a lot of guides, and also searched this thread for various topics and questions I had, and I've already found some very useful guides as well as some tips and tricks. But I just can't scroll through 2,500+ posts to find all the information I need, so please bear with me if I ask a couple of stupid questions that might have been answered already and I just missed the right posts...

So, I downloaded and installed ComfyUI and kinda got it working on my main Windows PC. I've also downloaded some checkpoints and was able to use those different checkpoints to get different results for the same prompts with the "default" ComfyUI workflow. Then I followed a guide about SDXL and I think I got that working as well. But I still have some questions, and if anybody could help me with them (or just link me to posts that have already covered them), I'll be more than grateful!

1) Using multiple PCs over network: when it comes to GPUs, my main PC has a 2080Ti, and I have a 2nd PC with a 1080. Is there a way to include the 2nd PC's GPU when using ComfyUI, is it worth the hassle to set it up, and how would I do that?

2) Using the "default" SDXL setup/workflow -- is there a way to include other models/checkpoints/LoRAs I've downloaded from Civitai, and how would I do that?

3) Are there any "must have" ComfyUI custom nodes I need to add to be able to set up a decent workflow?

4) What are your personal "must have" negative prompts to avoid those horrible deformed bodies with missing/additional limbs? Are these prompts checkpoint-specific, or do you have some "default" prompts you always use no matter what checkpoint you use?

5) I've seen a lot of "grid" pictures from you guys, and I was wondering how to create those with ComfyUI and then select a couple of pictures from the grid to upscale/improve only those. What's your workflow for accomplishing this, or is this something that's not possible with ComfyUI and I've just misunderstood how "grids" work?

6) I've read about being able to include all your ComfyUI settings in a saved PNG, but so far I couldn't figure out how to do it. Is there any guide on how to write that information into a PNG (or into a separate file that corresponds to a specific PNG) so I can go back to a specific PNG I've saved and just mess with it?

Sorry again if these are some very basic questions, so please feel free to ignore this post (or me in general), but any reply with useful links or tips and tricks is highly appreciated! Thanks a lot in advance!
I don't use comfyui myself but I do have it installed, so I can answer a couple.

2) Put the files you've downloaded into the correct folders, which are under comfyui/models
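For reference, the usual layout under that folder looks like this (folder names may differ slightly between ComfyUI versions):

```
ComfyUI/models/
├── checkpoints/      <- full models (.safetensors / .ckpt) from Civitai
├── loras/            <- LoRA files
├── vae/              <- standalone VAEs
├── controlnet/       <- ControlNet models
├── embeddings/       <- textual-inversion embeddings
└── upscale_models/   <- ESRGAN-style upscalers
```

After dropping files in, refresh the browser page (or restart ComfyUI) so the loader nodes pick them up.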

3) I think the ComfyUI Manager is separate from ComfyUI itself; if you install it, you can then use it to download any missing nodes from workflows.



5) Grids are, I think, an A1111 option rather than a ComfyUI thing; I certainly haven't found them in my brief forays into ComfyUI - maybe it's a setting I've missed?

6) I think they're included in the metadata of your PNG file. I know that if you drag a PNG made with ComfyUI onto your ComfyUI window it will show the workflow, which is a damn clever system. Then you just use the Manager to grab any custom nodes you're missing.
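If you want to poke at that metadata outside ComfyUI: the workflow really is stored inside the PNG itself, as plain tEXt chunks (current ComfyUI builds write "prompt" and "workflow" keys). A minimal stdlib-only sketch for pulling them out, assuming that chunk layout:

```python
import struct

def read_png_text_chunks(data: bytes) -> dict:
    """Extract tEXt chunks (keyword -> value) from raw PNG bytes.
    ComfyUI saves its workflow JSON under the 'workflow' / 'prompt' keys."""
    if data[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    chunks, pos = {}, 8
    while pos < len(data):
        # each chunk: 4-byte length, 4-byte type, data, 4-byte CRC
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, val = body.partition(b"\x00")
            chunks[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length
        if ctype == b"IEND":
            break
    return chunks
```

Usage would be something like `read_png_text_chunks(open("ComfyUI_00001_.png", "rb").read()).get("workflow")`, which gives you the workflow JSON you could also save to a sidecar file next to the image.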
 

me3

Member
Dec 31, 2016
316
708
After figuring out several things to make training fairly fast on low-spec systems, I thought I'd stick with the horrible idea of doing very slow and badly suited things. So I did some "quick" tests and first goes at doing "animated" generations... and it definitely qualifies as "very slow", so far anyway; hoping I can figure out some way to improve on it.
This is just from a "start to finish" trial run, so there's a bunch of stuff that should have been tweaked, but the main goal was more to get things set up and produce a working result. The low framerate is intentional; the hope was it would look slightly more "natural" as facial movement - debatable how that worked out. Should have specified a single color for the clothing, just one more thing to keep in mind for next time...
Hopefully things don't break too much in posting and in the compression.

View attachment kendra.webp
(Edit note: webp showed as working image in the editor when writing, but not when actually posted :( )
 
  • Red Heart
  • Like
Reactions: Mr-Fox and Sepheyer

sharlotte

Member
Jan 10, 2019
291
1,552
Ok, I'm not a pro at ComfyUI (nor A1111 for that matter ;( ), but there are a lot of videos out there that I would recommend going through, as they offer advice and workflows. One of the guys I find useful is .
As for your questions: 1, I cannot answer, as I've never tried.
There are nodes you can install to get LoRAs in, and more advanced workflows to generate your picture. I include below a picture which contains a workflow with a single LoRA. Just drag it into Comfy, or load it from Comfy, to get that specific flow.

To use a specific LoRA or model, just go to the node where it should sit and click on the name; it will automatically present you with the other LoRAs and models you may have downloaded and saved. BTW, I use a different location than Comfy's for models, LoRAs, ControlNet... as I can then use these for both Comfy and SD.
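For anyone wanting the same shared-folder setup: ComfyUI supports this through the extra_model_paths.yaml file in its root folder (it ships as extra_model_paths.yaml.example; rename and edit it). A sketch pointing at an A1111 install - the base_path below is just an example, swap in your own:

```yaml
# extra_model_paths.yaml (in the ComfyUI root folder)
a111:
    base_path: D:/stable-diffusion-webui/   # example path, adjust to yours
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: |
        models/Lora
        models/LyCORIS
    controlnet: models/ControlNet
```

Restart ComfyUI after editing and the loader nodes will list models from both locations.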

1701327041524.png

ComfyUI Manager is the one you want to have first. It helps you manage all the nodes and flows. This is what the menu looks like:
1701326894084.png

It allows you to install directly from ComfyUI using the 'install' functions (middle column above). Really great tool.
As for what is needed, it all depends on what you want to do. Here is a sample of what is installed on my ComfyUI (which I hadn't used in a while):
sdxl_prompt_styler
FreeU_Advanced
stability-ComfyUI-nodes
ComfyUI_IPAdapter_plus
ComfyUI_NestedNodeBuilder
IPAdapter-ComfyUI
ComfyLiterals
ComfyUI-Custom-Scripts
ComfyUI_UltimateSDUpscale
efficiency-nodes-comfyui
ComfyUI-Inspire-Pack
Derfuu_ComfyUI_ModdedNodes
comfyui_controlnet_aux
ComfyUI_Comfyroll_CustomNodes
facedetailer
comfyui-reactor-node
ComfyUI-Manager
comfy_mtb
was-node-suite-comfyui
ComfyUI-Impact-Pack
ComfyUI_00188_.png

Didn't really take the time to figure out the blood ;) it looks a bit plastic, but at least you'll get the flow from the picture ;)
 

Nano999

Member
Jun 4, 2022
165
73
Do you guys have any idea what checkpoint and lora was used here?
 

VanMortis

Newbie
Jun 6, 2020
44
632
1) Using multiple PCs over network: when it comes to GPUs, my main PC has a 2080Ti, and I have a 2nd PC with a 1080. Is there a way to include the 2nd PC's GPU when using ComfyUI, is it worth the hassle to set it up and how would I do that?
1) You can try out StableSwarmUI and see if you get it working. Scott Detweiler has a tutorial on it. And as sharlotte pointed out, his tutorials are a very good starting point if you want to learn more about ComfyUI, which custom nodes are useful, and what they do.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,570
3,766
When I started doing image-to-image I came across the need to use ComfyUI nodes such as sharpening and image transformations, which in turn required understanding how these things work. Some of you already know this, but for me it was a rather new concept that the ComfyUI workspace is a great testbed for determining which upscale/sharpening parameters work best for your particular image.

So, yeah, maybe this will save you a quick minute:
_sharpening_test__00021_.png
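For intuition while running such a testbed: nearly every "sharpen" node is some form of unsharp masking, so the parameter you end up sweeping is usually the unsharp "amount". An illustrative pure-Python 1-D sketch (not ComfyUI's actual node code):

```python
def unsharp_1d(pixels, amount=0.5):
    """Unsharp mask on a 1-D grayscale strip:
    sharpened = original + amount * (original - blurred).
    2-D sharpen nodes do essentially this per channel; 'amount'
    is the knob worth sweeping in a side-by-side test grid."""
    # simple 3-tap box blur, edges clamped
    blurred = [
        (pixels[max(i - 1, 0)] + pixels[i] + pixels[min(i + 1, len(pixels) - 1)]) / 3
        for i in range(len(pixels))
    ]
    return [p + amount * (p - b) for p, b in zip(pixels, blurred)]
```

Running it on a hard edge like `[0, 0, 100, 100]` shows why too much "amount" gives halos: the values overshoot past the original black and white on either side of the edge.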
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
After figuring out several things to make training fairly fast on lowspec systems i thought i'd stick with the horrible idea of doing very slow and badly suited things. So i did some "quick" tests and goes at doing "animated" generations...and it's s definitely to "very slow", so far anyway, hoping i can figure out some way to improve on it.
This is just from a "start to finish" trial run so a bunch of stuff that should have been tweaked, but main goal was more to get things set up and produce a working result. The low framerate is intentional, hope was it would look slightly more "natural" as a facial movement, debatable how it worked out. Should have specified a single color for clothing, just one more thing to keep in mind for next time...
Hopefully things don't break too much in posting and in the compression.

View attachment 3128810
(Edit note: webp showed as working image in the editor when writing, but not when actually posted :( )
I'm happy to see someone using my LoRA. It's a bit overtrained, so my tip is to not use more than 0.8 strength; I find the sweet spot is 0.3-0.6, and I use about 0.4 most of the time.
It was trained with clip skip 2, so it gets activated more and works better with clip skip 2. Trigger words: "kendra",
"headband" and in particular "black headband". There might be more trigger words. "Black headband" became a trigger word completely by accident, since most of the source images have a black headband in them.
 
Last edited:
  • Like
Reactions: Sepheyer

Sepheyer

Well-Known Member
Dec 21, 2020
1,570
3,766
Hi guys! Total noob here when it comes to AI-generated images, but I'm trying to get started somehow. I've read the OP and a lot of guides, and also searched this thread for various topics and questions I had, and I've already found some very useful guides as well as some tips and tricks. But I just can't scroll through 2,500+ posts to find all the information I need, so please bear with me if I ask a couple of stupid questions that might have been answered already and I just missed the right posts...

So, I downloaded and installed ComfyUI and kinda got it working on my main Windows PC. I've also downloaded some checkpoints and was able to use those different checkpoints to get different results for the same prompts with the "default" ComfyUI workflow. Then I followed a guide about SDXL and I think I got that working as well. But I still have some questions, and if anybody could help me with them (or just link me to posts that have already covered them), I'll be more than grateful!

1) Using multiple PCs over network: when it comes to GPUs, my main PC has a 2080Ti, and I have a 2nd PC with a 1080. Is there a way to include the 2nd PC's GPU when using ComfyUI, is it worth the hassle to set it up, and how would I do that?

2) Using the "default" SDXL setup/workflow -- is there a way to include other models/checkpoints/LoRAs I've downloaded from Civitai, and how would I do that?

3) Are there any "must have" ComfyUI custom nodes I need to add to be able to set up a decent workflow?

4) What are your personal "must have" negative prompts to avoid those horrible deformed bodies with missing/additional limbs? Are these prompts checkpoint-specific, or do you have some "default" prompts you always use no matter what checkpoint you use?

5) I've seen a lot of "grid" pictures from you guys, and I was wondering how to create those with ComfyUI and then select a couple of pictures from the grid to upscale/improve only those. What's your workflow for accomplishing this, or is this something that's not possible with ComfyUI and I've just misunderstood how "grids" work?

6) I've read about being able to include all your ComfyUI settings in a saved PNG, but so far I couldn't figure out how to do it. Is there any guide on how to write that information into a PNG (or into a separate file that corresponds to a specific PNG) so I can go back to a specific PNG I've saved and just mess with it?

Sorry again if these are some very basic questions, so please feel free to ignore this post (or me in general), but any reply with useful links or tips and tricks is highly appreciated! Thanks a lot in advance!
I was certain grids couldn't be done in CUI even when I got up today, but a mere quick second ago I stumbled onto a CUI grid workaround while searching for inpaint workflows on Civitai:
 

me3

Member
Dec 31, 2016
316
708
Debatable how well it worked, but for a simple "dump one image into a simple setup" it isn't that bad. Hopefully more digging into this can provide something that allows for more "control".
View attachment ComfyUI_00180_.webp

Initial image stolen from this post, so all credit to sharlotte for that

Edit: Adding a 30% size gif as preview, webp should be better quality and size, and it contains the workflow
180-resize.gif
 
Last edited:

me3

Member
Dec 31, 2016
316
708
I thought I'd convert this one to GIF to make it easier to post and view, but with a 6 MB file size I doubt I'd be able to attach it, so webp it is.
(if only admins would add support for it *cough* :sneaky:)

View attachment ComfyUI_00181_.webp

So far the AI seems to be somewhat good at selecting elements and "actions"; granted, it's just 2 images, but the same setup made one person walk forward, and in this one it animated the flames.

Edit: Adding a 30% size gif as preview, webp should be better quality and size, and it contains the workflow
181-resize.gif
 
Last edited:

theMickey_

Engaged Member
Mar 19, 2020
2,193
2,824
Jimwalrus, Sharinel, sharlotte, VanMortis, Sepheyer and anyone who I missed -- first of all: thank you guys for replying to my noob-questions, I really very much appreciate it! It took me a while to go through all your answers, and while they did in fact answer some of my questions, they also added so much more details I didn't know about that I've realized I was only looking at the top of that deep rabbit hole called ComfyUI -- and I'm lovin' it! :)

In the meantime I watched all of Scott Detweiler's videos about ComfyUI (great videos btw, so thanks again for the suggestions), I installed the ComfyUI-Manager (didn't know about that) and started to build my own workflows from scratch! It's way less intimidating now that I (kinda) know what to do in general. I dragged a couple of PNG files onto my workspace and was able to look at those workflows (another thing I didn't know was possible, and I must say that's the most impressive thing I've ever seen!), which helps a lot to understand how some of those pictures have been created. I will also have to look at some of those custom nodes and which are the ones I would really need to achieve certain things, because I'm usually trying to keep it as simple as I can and avoid installing everything and getting overwhelmed with all the possibilities.

So far I haven't tried to get this network thing going I was asking about, that will have to wait until another time, because it seems to be complicated and I'll probably need to read more about it before I start trying to set it up. And I'm still trying to figure out how to incorporate multiple checkpoints and Loras into a single image, but I'm getting there I think...

So thanks again to you for taking the time to answer my questions. I'll now have to spend more time experimenting with ComfyUI, and I'll continue reading all your suggestions.

Cheers, much love!
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,570
3,766
Jimwalrus, Sharinel, sharlotte, VanMortis, Sepheyer and anyone who I missed -- first of all: thank you guys for replying to my noob-questions, I really very much appreciate it! It took me a while to go through all your answers, and while they did in fact answer some of my questions, they also added so much more details I didn't know about that I've realized I was only looking at the top of that deep rabbit hole called ComfyUI -- and I'm lovin' it! :)

In the meantime I watched all of Scott Detweiler's videos about ComfyUI (great videos btw, so thanks again for the suggestions), I installed the ComfyUI-Manager (didn't know about that) and started to build my own workflows from scratch! It's way less intimidating now that I (kinda) know what to do in general. I dragged a couple of PNG files onto my workspace and was able to look at those workflows (another thing I didn't know was possible, and I must say that's the most impressive thing I've ever seen!), which helps a lot to understand how some of those pictures have been created. I will also have to look at some of those custom nodes and which are the ones I would really need to achieve certain things, because I'm usually trying to keep it as simple as I can and avoid installing everything and getting overwhelmed with all the possibilities.

So far I haven't tried to get this network thing going I was asking about, that will have to wait until another time, because it seems to be complicated and I'll probably need to read more about it before I start trying to set it up. And I'm still trying to figure out how to incorporate multiple checkpoints and Loras into a single image, but I'm getting there I think...

So thanks again to you for taking the time to answer my questions. I'll now have to spend more time experimenting with ComfyUI, and I'll continue reading all your suggestions.

Cheers, much love!
One thing: stick with simple workflows of your own for as long as you can. Once you get bored, you will naturally seek ways to add complexity where you need it. Diving straight into a heavy workflow is really a no-go. It takes time for your brain to realize that what CUI actually does is juggle latents.

At some point in the future you'll have an "aha": oh, that's what he meant by "juggles latents"! And from that point on you'll have a super easy time with workflows regardless of their complexity. Until then, a sincere recommendation to keep them simple.
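One concrete way to see the "juggling latents" point: for SD1.5/SDXL, the tensor passed between nodes is 4 channels at 1/8th of the image resolution; the KSampler never touches pixels, and only the final VAE decode turns the latent into an image. A small sketch of that relationship:

```python
def latent_shape(width: int, height: int) -> tuple:
    """Shape of the latent tensor ComfyUI passes between nodes for a
    given output size (SD1.5/SDXL VAE: 4 channels, 1/8th resolution).
    This is also why SD image sizes must be multiples of 8."""
    if width % 8 or height % 8:
        raise ValueError("SD image sizes must be multiples of 8")
    return (4, height // 8, width // 8)
```

So a 1280x720 generation means every node before the VAE decode is working on a (4, 90, 160) tensor, which is what makes latent-space operations so much cheaper than pixel-space ones.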
 

sharlotte

Member
Jan 10, 2019
291
1,552
theMickey_ it's only been 11 months since Sepheyer started this thread, and it's been an incredible journey of discovery for most of us. It still is, as SD and CUI keep getting better and offer better tools. I let it go over the summer (in the Northern hemisphere - if you can ever have a 'summer' in Ireland) and, coming slowly back to it now, it's just incredible what's available. Enjoy, and feel free to share your discoveries here.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,570
3,766
ComfyUI Tip - Grouping Nodes

Whoa. So:

- Hold Ctrl to select multiple nodes.
- Right-click and choose "Convert to group node".

This will create one super-node that combines all the nodes you had selected. Great for decluttering.
 
  • Like
Reactions: VanMortis

me3

Member
Dec 31, 2016
316
708
When you have bad ideas, but since you've gotten this far you might as well finish.
At the time of writing I've passed generating image 520 in the "series" and I'm still not halfway, and I just noticed that I've been generating at 1288x720 instead of 1280x720 :(
Restarting isn't exactly an option, so one more fuckup for the list...
Anyway, the idea was to test making longer clips at a larger image size and at a decent framerate, without using the existing animation features. So: pushing SD1.5 to generate larger non-upscaled/highres-fixed images and keeping every finger crossed...
This is a small preview, cut down in size and length and compressed to fit as a postable GIF. I'll do a more detailed write-up once this thing finishes or breaks completely.
kd-optimize.gif

As a side note, you can "easily" queue up over 1000 prompts in ComfyUI. Not sure that's going to be needed all that much, but I guess that's one more limit tested :p
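If anyone wants to queue in bulk without clicking: ComfyUI exposes a small HTTP API (the script_examples folder in the ComfyUI repo uses it), and you can POST a workflow exported in API format ("Save (API Format)" from the dev options) to /prompt. A sketch, assuming the default server address:

```python
import json
import urllib.request

def build_payload(api_workflow: dict) -> bytes:
    """Wrap an API-format workflow the way ComfyUI's POST /prompt expects."""
    return json.dumps({"prompt": api_workflow}).encode("utf-8")

def queue_prompt(api_workflow: dict, server: str = "127.0.0.1:8188") -> None:
    """Send one job to the ComfyUI queue; call in a loop to queue hundreds."""
    req = urllib.request.Request(f"http://{server}/prompt",
                                 data=build_payload(api_workflow))
    urllib.request.urlopen(req)
```

For a long series you'd load the exported JSON once, then mutate the KSampler node's seed (or the prompt text) between `queue_prompt` calls in a loop.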
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,570
3,766
So, I finally stole lots of other people's workflows (especially this ) to finally have a decent image-to-image with outpaint.

Great, show this to ur mom.

Hold up, here is the use case: you can generate images with model A at a 1:1.5 aspect ratio and bring them into another model, with upscaling, at 1:1.

And the reason you use model A is because it has poses or body proportions that the other model can't give you - see the side by side below.
You don't have permission to view the spoiler content. Log in or register now.
illustration.png
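The aspect-ratio hop in that use case boils down to a padding calculation before outpainting. An illustrative sketch of what a "Pad Image for Outpainting"-style node works out:

```python
def outpaint_pad_to_square(width: int, height: int) -> tuple:
    """Padding needed on the two short sides to outpaint an image onto a
    1:1 canvas, split as evenly as possible. For a portrait image this is
    (left, right); for a landscape one, (top, bottom)."""
    total = abs(height - width)
    first = total // 2
    return (first, total - first)
```

For example, a 832x1216 portrait (roughly 1:1.5) needs 192 px outpainted on each side to land on a 1216x1216 square, which the second model then refines and upscales.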