[Stable Diffusion] Prompt Sharing and Learning Thread

Jimwalrus

Active Member
Sep 15, 2021
874
3,236
Or create both (from perhaps subtly different training data sets) and try the 'TI plus LoRA' approach. You're likely to get even better results.
Or you could get some random cocker spaniel / octopus hybrid, this is Stable Diffusion after all!
 
  • Like
Reactions: Sepheyer

Sepheyer

Well-Known Member
Dec 21, 2020
1,526
3,596
I have read the glossary but my head is bad :censored: so here are my dumb questions: say I have about 100-1000 images of a model (woman), should I train a LORA or an embedding with that? And if my input pictures of women have similar features, e.g. big tits, small hips, can a LORA or embedding understand the general idea/concept I want to get at? I want to create realistic images.
We kinda have two LORA training threads. One's linked in the very first post that started this thread. The author takes you through making a LORA in 1-2-3 steps. Trust me, you do want to follow those just to wrap your head around some concepts.

Then we have this community thread of training a LORA: https://f95zone.to/threads/loras-for-wildeers-lara-croft-development-thread.173873/
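If it helps to see the destination before you start: this is roughly the shape of the kohya sd-scripts call those guides build up to - a hedged sketch, not a copy of either guide, and every path and number is a placeholder. With 100-1000 images of one subject, a LORA is the usual pick for that much data.

```python
# Hedged sketch of a kohya sd-scripts LORA training run - just the general
# shape, NOT the guide's exact command. All paths and values are placeholders.
import subprocess

subprocess.run([
    "accelerate", "launch", "train_network.py",
    "--pretrained_model_name_or_path", "models/base.safetensors",  # placeholder
    "--train_data_dir", "datasets/my_model",                       # placeholder
    "--output_dir", "output/loras",
    "--output_name", "my_model_lora",
    "--network_module", "networks.lora",   # this flag is what makes it a LORA
    "--network_dim", "32",
    "--resolution", "512,512",
    "--learning_rate", "1e-4",
    "--max_train_steps", "2000",
    "--save_model_as", "safetensors",
], check=True)
```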
 

Synalon

Member
Jan 31, 2022
202
626
Would anyone have a good SDXL ComfyUI setup?

The fucking thing is laughing at me.

View attachment 3125118
Here's a basic workflow I put together. It has SDXL and a refiner; I'm not sure I added the detailer correctly, and I still have no idea how to add half of the other things like ControlNet etc.

To clarify things for those that don't use comfy much: as far as I can tell you need a slightly different workflow for SD1.5 compared to SDXL, and this is the SDXL workflow.
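If you ever want to script it: once a workflow does what you want, you can export it via "Save (API Format)" (enable the dev mode options in the settings) and queue it against a running local instance. A minimal sketch, assuming the default 127.0.0.1:8188:

```python
# Minimal sketch: queue an exported API-format workflow against a local
# ComfyUI instance (default address assumed).
import json
import urllib.request

with open("sdxl_workflow_api.json") as f:   # exported via "Save (API Format)"
    graph = json.load(f)

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": graph}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)  # ComfyUI picks it up and renders as usual
```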
 
Last edited:

Sepheyer

Well-Known Member
Dec 21, 2020
1,526
3,596
So, in ComfyUI I suddenly started getting nightmarish renders. Like brrr FML WTF bro.

Turns out one of the negative prompts was connected as a positive into the sampler. So, yeah, look at your negative prompts - that's what I was rendering for a short second. Brrr.
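For anyone wiring by hand or debugging the JSON: in the API-format graph the KSampler's "positive" and "negative" inputs just point at node ids, which is exactly how a swap like mine sneaks in. A minimal sketch (node ids are hypothetical):

```python
# Sketch of a KSampler node in API-format JSON. The two-element lists are
# [source_node_id, output_index] links; the ids here are placeholders.
ksampler = {
    "class_type": "KSampler",
    "inputs": {
        "model": ["4", 0],
        "positive": ["6", 0],  # must point at the positive CLIPTextEncode...
        "negative": ["7", 0],  # ...and this at the negative one - swap these
                               # and you render your negative prompt. Brrr.
        "latent_image": ["5", 0],
        "seed": 42,
        "steps": 20,
        "cfg": 7.0,
        "sampler_name": "euler",
        "scheduler": "normal",
        "denoise": 1.0,
    },
}
```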

Here's some therapy for me.

a_03963_.png
 
Last edited:

me3

Member
Dec 31, 2016
316
708
So, now you can do SDXL in real time:



Watch for just 10 seconds.

It's called SDXL Turbo, still being rolled out.
People have been doing "live" renderings of webcam input with LCM; I don't have a clip at hand, I just came across one on an LCM info page the other day.
There's a delay when you see it side by side, but it's still a pretty good start.
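If you want to poke at Turbo locally, the diffusers usage is pleasantly minimal - a hedged sketch based on the model card, with a placeholder prompt. The key bits are the single step and guidance_scale=0.0, since Turbo is trained to run without CFG:

```python
# Hedged sketch from the SDXL Turbo model card: one step, no CFG.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

image = pipe(
    prompt="cinematic photo of a cat in a spacesuit",  # placeholder prompt
    num_inference_steps=1,   # single step is the whole point of Turbo
    guidance_scale=0.0,      # Turbo runs without classifier-free guidance
).images[0]
image.save("turbo_test.png")
```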

Edit: Found the page, it's almost at the bottom of
 
Last edited:
  • Like
Reactions: Mr-Fox

sharlotte

Member
Jan 10, 2019
268
1,432
Just found out that there is a new thing called Deep Shrink, also known as Kohya Hires Fix (A1111 - search for sd-webui-kohya-hiresfix in the extensions), which allows you to produce hires pictures without needing to use the HiRes Fix. Much faster. Loads of articles on reddit about it. It also prevents the double heads or monstrous bodies generated when using large width/height (I still got weird ones, but far fewer and no double heads).
There's a video by Nerdy Rodent which also covers this option.
Just generated in SDXL and it looks good:
00009-16112847.png

Looks like this in SD, just make sure you enable the extension:
1701280594237.png
I did not change the settings that are enabled by default otherwise.
00012-3613568577.png 00017-246436635.png 00017-246436635.png 00019-2949865314.png
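My loose understanding of what it's doing under the hood - a conceptual sketch only, not the extension's actual code:

```python
# Conceptual sketch only - not the extension's actual code. The idea: during
# the early denoising steps, downscale the UNet's intermediate features so
# the composition is laid out at the model's "native" resolution (no doubled
# heads), then stop shrinking so the remaining steps render detail full-size.
import torch.nn.functional as F

def deep_shrink(features, step, total_steps, end_ratio=0.35, scale=0.5):
    """Downscale features for the first ~35% of steps, pass through after."""
    if step < total_steps * end_ratio:
        return F.interpolate(features, scale_factor=scale, mode="bicubic")
    return features
```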
 

me3

Member
Dec 31, 2016
316
708
Just found out that there is a new thing called Deep Shrink, also known as Kohya Hires Fix (A1111 - search for sd-webui-kohya-hiresfix in the extensions), which allows you to produce hires pictures without needing to use the HiRes Fix. Much faster. Loads of articles on reddit about it. It also prevents the double heads or monstrous bodies generated when using large width/height (I still got weird ones, but far fewer and no double heads).
There's a video by Nerdy Rodent which also covers this option.
Just generated in SDXL and it looks good:
View attachment 3127775

Looks like this in SD, just make sure you enable the extension:
View attachment 3127776
I did not change the settings that are enabled by default otherwise.
View attachment 3127819 View attachment 3127824 View attachment 3127824 View attachment 3127835
I don't know what this does, so sorry if it's already mentioned in the linked stuff.
When using SDXL in comfyui you can use the target width/height (I think that's what they're generally called) in the SDXL clip text encode to deal with "copies". You shouldn't just blindly set this to an insane number as it can have odd effects, but if you are getting duplicates of people, or things like 1.5 bodies stacked in very tall images, you can increase the relevant value to fix it.

Not sure if there's something similar in a1111, as I can't use SDXL there
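For reference, these are the fields I mean on the SDXL text encode node, in API-format terms - a minimal sketch where ids, text, and sizes are placeholders. For a tall render, raising target_height toward the real output height is the duplicate fix:

```python
# Sketch of the CLIPTextEncodeSDXL node in API-format terms; node ids, text
# and sizes are placeholders.
sdxl_encode = {
    "class_type": "CLIPTextEncodeSDXL",
    "inputs": {
        "clip": ["4", 1],
        "text_g": "full body photo of a woman on a beach",
        "text_l": "full body photo of a woman on a beach",
        "width": 1024, "height": 1024,                 # conditioning size
        "crop_w": 0, "crop_h": 0,
        "target_width": 1024, "target_height": 1664,   # intended output size
    },
}
```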
 

theMickey_

Engaged Member
Mar 19, 2020
2,112
2,650
Hi guys! Total noob here when it comes to AI generated images, but I'm trying to somehow get started. I've read the OP and a lot of guides, also searched this thread for various topics and questions I had, and already found some very useful guides as well as some tips and tricks. But I just can't scroll through 2,500+ posts to find all the information I need, so please bear with me if I ask a couple of stupid questions which might have been answered already and I just failed to find the right posts...

So, I downloaded and installed ComfyUI and kinda got it working on my main Windows PC: I've also downloaded some checkpoints, and was able to use those different checkpoints to get different results for the same prompts with the "default" ComfyUI workflow. Then I followed some guide about SDXL and I think I got it working as well. But I still have some questions, and if anybody could help me with them (or just link me to some posts which have already covered them), I'll be more than grateful!

1) Using multiple PCs over a network: when it comes to GPUs, my main PC has a 2080Ti, and I have a 2nd PC with a 1080. Is there a way to include the 2nd PC's GPU when using ComfyUI, is it worth the hassle to set it up, and how would I do that?

2) Using the "default" SDXL setup/workflow -- is there a way to include other models/checkpoints/LoRAs I've downloaded from Civitai, and how would I do that?

3) Are there any "must have" ComfyUI custom nodes I need to add to be able to set up a decent workflow?

4) What are your personal "must have" negative prompts to avoid those horrible deformed bodies with missing/additional limbs? Are these prompts checkpoint-specific, or do you have some "default" prompts you always use no matter what checkpoint you use?

5) I've seen a lot of "grid" pictures from you guys, and I was wondering how to create those with ComfyUI and then select a couple of pictures from the grid to upscale/improve only the selected ones? What's your workflow to accomplish this, or is this something that's not possible with ComfyUI and I just misunderstood how "grids" work?

6) I've read about being able to include all your ComfyUI settings in a saved PNG, but so far I couldn't figure out how to do it. Is there any guide on how to write that information into a PNG (or into a separate file that corresponds to a specific PNG) so I can go back to a specific PNG I've saved and just mess with it?

Sorry again if those are some very basic questions, so please feel free to ignore this post (or me in general), but any reply with useful links or tips and tricks is highly appreciated! Thanks a lot in advance!
 

Jimwalrus

Active Member
Sep 15, 2021
874
3,236
Hi guys! Total noob here when it comes to AI generated images, but I'm trying to somehow get started. I've read the OP and a lot of guides, also searched this thread for various topics and questions I had, and already found some very useful guides as well as some tips and tricks. But I just can't scroll through 2,500+ posts to find all the information I need, so please bear with me if I ask a couple of stupid questions which might have been answered already and I just failed to find the right posts...

So, I downloaded and installed ComfyUI and kinda got it working on my main Windows PC: I've also downloaded some checkpoints, and was able to use those different checkpoints to get different results for the same prompts with the "default" ComfyUI workflow. Then I followed some guide about SDXL and I think I got it working as well. But I still have some questions, and if anybody could help me with them (or just link me to some posts which have already covered them), I'll be more than grateful!

1) Using multiple PCs over a network: when it comes to GPUs, my main PC has a 2080Ti, and I have a 2nd PC with a 1080. Is there a way to include the 2nd PC's GPU when using ComfyUI, is it worth the hassle to set it up, and how would I do that?

2) Using the "default" SDXL setup/workflow -- is there a way to include other models/checkpoints/LoRAs I've downloaded from Civitai, and how would I do that?

3) Are there any "must have" ComfyUI custom nodes I need to add to be able to set up a decent workflow?

4) What are your personal "must have" negative prompts to avoid those horrible deformed bodies with missing/additional limbs? Are these prompts checkpoint-specific, or do you have some "default" prompts you always use no matter what checkpoint you use?

5) I've seen a lot of "grid" pictures from you guys, and I was wondering how to create those with ComfyUI and then select a couple of pictures from the grid to upscale/improve only the selected ones? What's your workflow to accomplish this, or is this something that's not possible with ComfyUI and I just misunderstood how "grids" work?

6) I've read about being able to include all your ComfyUI settings in a saved PNG, but so far I couldn't figure out how to do it. Is there any guide on how to write that information into a PNG (or into a separate file that corresponds to a specific PNG) so I can go back to a specific PNG I've saved and just mess with it?

Sorry again if those are some very basic questions, so please feel free to ignore this post (or me in general), but any reply with useful links or tips and tricks is highly appreciated! Thanks a lot in advance!
Not being a ComfyUI user (or SDXL - can't get the bastard to work!), I can really only answer #4.
I really like to use negative embeddings:
For photorealistic I pretty much always use two of them (the latter from our very own devilkkw).
The former also works pretty well in cartoon/anime checkpoints. I believe devilkkw has also done an extreme negative for these, but I haven't tried it yet.
For hands, I use one embedding for cartoons and another for photorealistic.
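In practice, "using" a negative embedding is just putting its filename in the negative prompt. A hedged sketch via the A1111 local API - the embedding names below are placeholders, not the actual files linked above:

```python
# Hedged sketch: negative embeddings are invoked by filename in the negative
# prompt. The embedding names are placeholders; default local A1111 assumed.
import json
import urllib.request

payload = {
    "prompt": "photo of a woman, detailed face, sharp focus",
    "negative_prompt": "negative_embed_photo, negative_embed_kkw, negative_embed_hands",
    "steps": 25,
}
req = urllib.request.Request(
    "http://127.0.0.1:7860/sdapi/v1/txt2img",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
resp = json.load(urllib.request.urlopen(req))  # images come back base64-encoded
```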
 

Sharinel

Member
Dec 23, 2018
499
2,072
Hi guys! Total noob here when it comes to AI generated images, but I'm trying to somehow get started. I've read the OP and a lot of guides, also searched this thread for various topics and questions I had, and already found some very useful guides as well as some tips and tricks. But I just can't scroll through 2,500+ posts to find all the information I need, so please bear with me if I ask a couple of stupid questions which might have been answered already and I just failed to find the right posts...

So, I downloaded and installed ComfyUI and kinda got it working on my main Windows PC: I've also downloaded some checkpoints, and was able to use those different checkpoints to get different results for the same prompts with the "default" ComfyUI workflow. Then I followed some guide about SDXL and I think I got it working as well. But I still have some questions, and if anybody could help me with them (or just link me to some posts which have already covered them), I'll be more than grateful!

1) Using multiple PCs over a network: when it comes to GPUs, my main PC has a 2080Ti, and I have a 2nd PC with a 1080. Is there a way to include the 2nd PC's GPU when using ComfyUI, is it worth the hassle to set it up, and how would I do that?

2) Using the "default" SDXL setup/workflow -- is there a way to include other models/checkpoints/LoRAs I've downloaded from Civitai, and how would I do that?

3) Are there any "must have" ComfyUI custom nodes I need to add to be able to set up a decent workflow?

4) What are your personal "must have" negative prompts to avoid those horrible deformed bodies with missing/additional limbs? Are these prompts checkpoint-specific, or do you have some "default" prompts you always use no matter what checkpoint you use?

5) I've seen a lot of "grid" pictures from you guys, and I was wondering how to create those with ComfyUI and then select a couple of pictures from the grid to upscale/improve only the selected ones? What's your workflow to accomplish this, or is this something that's not possible with ComfyUI and I just misunderstood how "grids" work?

6) I've read about being able to include all your ComfyUI settings in a saved PNG, but so far I couldn't figure out how to do it. Is there any guide on how to write that information into a PNG (or into a separate file that corresponds to a specific PNG) so I can go back to a specific PNG I've saved and just mess with it?

Sorry again if those are some very basic questions, so please feel free to ignore this post (or me in general), but any reply with useful links or tips and tricks is highly appreciated! Thanks a lot in advance!
I don't use comfyui myself but I do have it installed, so I can answer a couple.

2) Put the files you've downloaded into the correct folders, which are under comfyui/models

3) I think the comfyui manager is separate from comfyui itself; if you install that, you can then use it to download any missing nodes from workflows.



5) Grids, I think, are an A1111 option rather than a comfyui thing; certainly I've not found them in my brief forays into comfyui - maybe it's a setting I've missed?

6) I think they are included in the metadata of your png file. I know that if you drag a png file made with comfyui onto your window it will show the workflow, which is a damn clever system. Then you just use the manager to grab any custom nodes you are missing.
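On #6, you can confirm it yourself: ComfyUI writes the workflow into the PNG's text metadata when saving, which is why drag-and-drop restores it. A minimal sketch with PIL (the filename is a placeholder):

```python
# Minimal sketch: read the workflow ComfyUI embedded in a PNG.
# "workflow" is the editable graph; "prompt" is the API-format graph queued.
import json
from PIL import Image

img = Image.open("ComfyUI_00001_.png")  # placeholder filename
workflow = img.info.get("workflow")
if workflow:
    print(json.dumps(json.loads(workflow), indent=2)[:500])  # peek at the graph
```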
 

me3

Member
Dec 31, 2016
316
708
After figuring out several things to make training fairly fast on low-spec systems, I thought I'd stick with the horrible idea of doing very slow and badly suited things. So I did some "quick" tests and attempts at doing "animated" generations... and it's definitely "very slow", so far anyway; hoping I can figure out some way to improve on it.
This is just from a "start to finish" trial run, so there's a bunch of stuff that should have been tweaked, but the main goal was more to get things set up and produce a working result. The low framerate is intentional; the hope was it would look slightly more "natural" as a facial movement - debatable how that worked out. Should have specified a single color for clothing, just one more thing to keep in mind for next time...
Hopefully things don't break too much in posting and in the compression.

View attachment kendra.webp
(Edit note: webp showed as working image in the editor when writing, but not when actually posted :( )
 
  • Red Heart
  • Like
Reactions: Mr-Fox and Sepheyer

sharlotte

Member
Jan 10, 2019
268
1,432
Ok, I'm not a pro at comfyUI (nor A1111 for that matter ;( ) but there are a lot of videos out there that I would recommend going through, as they offer advice and workflows. One of the guys I find useful is Scott Detweiler.
As for your questions: #1 I cannot answer, as I've never tried.
For #2, there are nodes you can install to get LORAs in, and more advanced workflows to generate your picture. I include below a picture which contains a workflow with a single LORA. Just drag it in, or load it from comfy, to get that specific flow.

To use a specific LORA or model, just go to the node where it should sit and click on the name - it will automatically present you with the other LORAs and models you've downloaded and saved. Btw, I use a different location than comfy's defaults for models, LORAs, controlnet... as I can then use these for both comfy and SD.
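The mechanism for that shared location, if anyone wants it: ComfyUI ships an extra_model_paths.yaml.example in its root folder that you can copy and point at an A1111-style model tree. A hedged sketch that just writes such a config - every path below is a placeholder for your own layout:

```python
# Hedged sketch: write an extra_model_paths.yaml pointing ComfyUI at an
# A1111-style model tree (ComfyUI ships a .example of this file to copy).
# Every path is a placeholder for your own layout.
from pathlib import Path

CONFIG = """\
a111:
    base_path: D:/stable-diffusion-webui/
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
    embeddings: embeddings
    controlnet: models/ControlNet
"""

Path("ComfyUI/extra_model_paths.yaml").write_text(CONFIG)
```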

1701327041524.png

ComfyUI Manager is the one you want to get first. It helps you manage all the nodes and flows. This is what the menu looks like:
1701326894084.png

It allows you to directly install from comfyUI using the 'install' functions (middle column above). Really great tool.
As for what is needed, it all depends on what you want to do. Here is a sample of what is installed on my comfyUI (which I had not used in a while):
sdxl_prompt_styler
FreeU_Advanced
stability-ComfyUI-nodes
ComfyUI_IPAdapter_plus
ComfyUI_NestedNodeBuilder
IPAdapter-ComfyUI
ComfyLiterals
ComfyUI-Custom-Scripts
ComfyUI_UltimateSDUpscale
efficiency-nodes-comfyui
ComfyUI-Inspire-Pack
Derfuu_ComfyUI_ModdedNodes
comfyui_controlnet_aux
ComfyUI_Comfyroll_CustomNodes
facedetailer
comfyui-reactor-node
ComfyUI-Manager
comfy_mtb
was-node-suite-comfyui
ComfyUI-Impact-Pack
ComfyUI_00188_.png

Didn't really take the time to figure out the blood ;) it looks like plastic, but at least you'll get the flow from the picture ;)
 

Nano999

Member
Jun 4, 2022
153
68
Do you guys have any idea what checkpoint and lora were used here?
 

VanMortis

Newbie
Jun 6, 2020
44
630
1) Using multiple PCs over a network: when it comes to GPUs, my main PC has a 2080Ti, and I have a 2nd PC with a 1080. Is there a way to include the 2nd PC's GPU when using ComfyUI, is it worth the hassle to set it up, and how would I do that?
1) You can try out Stability Swarm and see if you get it working. Scott Detweiler has a tutorial on it. And as sharlotte pointed out, his tutorials are a very good starting point if you want to learn more about ComfyUI, which custom nodes are useful, and what they do.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,526
3,596
When I started doing image-2-image, I came across the need to use ComfyUI's nodes for things such as sharpening and image transformations, which in turn required understanding how these work. Some of you already know this, but for me it was a rather new concept: the ComfyUI workspace is a great testbed to determine which upscale/sharpening parameters work best for your particular image.

So, yeah, maybe this will save you a quick minute:
_sharpening_test__00021_.png
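The same idea works outside ComfyUI as a quick sanity check - sweep sharpening parameters over one image with PIL and eyeball the results. A minimal sketch; the filenames are placeholders:

```python
# Minimal sketch: brute-force an unsharp-mask parameter sweep with PIL and
# compare the outputs by eye. Filenames are placeholders.
from PIL import Image, ImageFilter

src = Image.open("input.png")
for radius in (1, 2, 4):
    for percent in (50, 100, 150):
        out = src.filter(ImageFilter.UnsharpMask(radius=radius, percent=percent))
        out.save(f"sharpen_r{radius}_p{percent}.png")
```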
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
After figuring out several things to make training fairly fast on low-spec systems, I thought I'd stick with the horrible idea of doing very slow and badly suited things. So I did some "quick" tests and attempts at doing "animated" generations... and it's definitely "very slow", so far anyway; hoping I can figure out some way to improve on it.
This is just from a "start to finish" trial run, so there's a bunch of stuff that should have been tweaked, but the main goal was more to get things set up and produce a working result. The low framerate is intentional; the hope was it would look slightly more "natural" as a facial movement - debatable how that worked out. Should have specified a single color for clothing, just one more thing to keep in mind for next time...
Hopefully things don't break too much in posting and in the compression.

View attachment 3128810
(Edit note: webp showed as working image in the editor when writing, but not when actually posted :( )
I'm happy to see someone using my lora. It's a bit overtrained, so my tip is to not use more than 0.8; I find that the sweet spot is 0.3-0.6, and I use about 0.4 most of the time.
It was trained with clip skip 2, so it activates more strongly and works better with clip skip 2. Trigger words: "kendra",
"headband" and in particular "black headband". There might be more trigger words. "Black headband" was completely by accident: since most of the source images have a black headband in them, it became a trigger word.
 
Last edited:
  • Like
Reactions: Sepheyer

Sepheyer

Well-Known Member
Dec 21, 2020
1,526
3,596
Hi guys! Total noob here when it comes to AI generated images, but I'm trying to somehow get started. I've read the OP and a lot of guides, also searched this thread for various topics and questions I had, and already found some very useful guides as well as some tips and tricks. But I just can't scroll through 2,500+ posts to find all the information I need, so please bear with me if I ask a couple of stupid questions which might have been answered already and I just failed to find the right posts...

So, I downloaded and installed ComfyUI and kinda got it working on my main Windows PC: I've also downloaded some checkpoints, and was able to use those different checkpoints to get different results for the same prompts with the "default" ComfyUI workflow. Then I followed some guide about SDXL and I think I got it working as well. But I still have some questions, and if anybody could help me with them (or just link me to some posts which have already covered them), I'll be more than grateful!

1) Using multiple PCs over a network: when it comes to GPUs, my main PC has a 2080Ti, and I have a 2nd PC with a 1080. Is there a way to include the 2nd PC's GPU when using ComfyUI, is it worth the hassle to set it up, and how would I do that?

2) Using the "default" SDXL setup/workflow -- is there a way to include other models/checkpoints/LoRAs I've downloaded from Civitai, and how would I do that?

3) Are there any "must have" ComfyUI custom nodes I need to add to be able to set up a decent workflow?

4) What are your personal "must have" negative prompts to avoid those horrible deformed bodies with missing/additional limbs? Are these prompts checkpoint-specific, or do you have some "default" prompts you always use no matter what checkpoint you use?

5) I've seen a lot of "grid" pictures from you guys, and I was wondering how to create those with ComfyUI and then select a couple of pictures from the grid to upscale/improve only the selected ones? What's your workflow to accomplish this, or is this something that's not possible with ComfyUI and I just misunderstood how "grids" work?

6) I've read about being able to include all your ComfyUI settings in a saved PNG, but so far I couldn't figure out how to do it. Is there any guide on how to write that information into a PNG (or into a separate file that corresponds to a specific PNG) so I can go back to a specific PNG I've saved and just mess with it?

Sorry again if those are some very basic questions, so please feel free to ignore this post (or me in general), but any reply with useful links or tips and tricks is highly appreciated! Thanks a lot in advance!
I was certain that grids can't be done in CUI even when I got up today, but merely a quick second ago I stumbled onto a CUI grid workaround while searching for inpaint workflows on Civitai:
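In the meantime, you can also fake an A1111-style grid from a folder of CUI outputs with PIL, to pick which ones to upscale - a minimal sketch, where the folder and tile size are placeholders:

```python
# Minimal sketch: paste the first nine outputs into a 3x3 contact sheet.
# Folder and tile size are placeholders.
from pathlib import Path
from PIL import Image

files = sorted(Path("ComfyUI/output").glob("*.png"))[:9]
thumbs = [Image.open(f).resize((256, 256)) for f in files]
cols = 3
rows = (len(thumbs) + cols - 1) // cols
grid = Image.new("RGB", (cols * 256, rows * 256))
for i, t in enumerate(thumbs):
    grid.paste(t, ((i % cols) * 256, (i // cols) * 256))
grid.save("grid.png")
```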