We kinda have two LoRA training threads. One is linked in the very first post that started this thread; the author takes you through making a LoRA in 1-2-3 steps. Trust me, you do want to follow those just to wrap your head around some concepts.

I have read the glossary but my head is bad, so here are my dumb questions: say I have about 100-1000 images of a model (woman), should I train a LoRA or an embedding with that? And if my input pictures of women have similar features, e.g. big tits, small hips, can a LoRA or embedding understand the general idea/concept I want to get at? I want to create realistic images.
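On the dataset-size question: whichever route you take, the total number of optimization steps matters as much as the raw image count. A minimal sketch of the usual kohya-style arithmetic; the repeats, epochs, and batch size here are illustrative placeholder values, not recommendations:

```python
# Rough LoRA training-step estimate: kohya-style trainers typically walk
# the dataset (images x repeats) once per epoch, divided across the batch.
def training_steps(num_images, repeats=10, epochs=4, batch_size=2):
    return (num_images * repeats * epochs) // batch_size

print(training_steps(100))   # 100 images  -> 2000 steps
print(training_steps(1000))  # 1000 images -> 20000 steps
```

With 1000 images you would normally lower the repeats so the total step count stays in the same ballpark as a 100-image run.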
Here's a basic workflow I put together. It has SDXL and a refiner; I'm not sure I added the detailer correctly, and I still have no idea how to add half of the other things like ControlNet etc. Would anyone have a good SDXL ComfyUI setup?
The fucking thing is laughing at me.
View attachment 3125118
They've been doing "live" renderings of webcam input with LCM; I don't have a clip at hand, I just came across one on an LCM info page the other day. So now you can do SDXL in real time:
Watch for just 10 seconds.
It's called SDXL Turbo, still being rolled out.
I don't know what this does, so sorry if it's already been mentioned in the linked stuff. Just found out that there is a new thing called Deep Shrink, also known as Kohya HiRes Fix (in A1111, search for sd-webui-kohya-hiresfix in the extensions), which allows you to produce hi-res pictures without needing to use the Hires Fix. Much faster. There are loads of articles on Reddit about it. It also prevents the double heads or monstrous bodies generated when using large width/height (I still got weird ones, but far fewer and no double heads).
Nerdy Rodent has a video which also covers this option.
Just generated in SDXL and it looks good:
View attachment 3127775
Looks like this in SD; just make sure you enable the extension:
View attachment 3127776
I did not change the settings that are enabled by default otherwise.
View attachment 3127819 View attachment 3127824 View attachment 3127835
Not being a ComfyUI user (or SDXL user; can't get the bastard to work!) I can really only answer #4.

Hi guys! Total noob here when it comes to AI-generated images, but I'm trying to somehow get started. I've read the OP and a lot of guides, also searched this thread for various topics and questions I had, and already found some very useful guides as well as some tips and tricks. But I just can't scroll through 2,500+ posts to find all the information I need, so please bear with me if I'm going to ask a couple of stupid questions which might have already been answered and I just missed the right posts....
So, I downloaded and installed ComfyUI and kinda got it working on my main Windows PC. I've also downloaded some checkpoints, and was able to use those different checkpoints to get different results for the same prompts with the "default" ComfyUI workflow. Then I followed some guide about SDXL and I think I got it working as well. But I still have some questions, and if anybody could help me with those (or just link me to some posts which have already covered them), I'll be more than grateful!
1) Using multiple PCs over network: when it comes to GPUs, my main PC has a 2080Ti, and I have a 2nd PC with a 1080. Is there a way to include the 2nd PC's GPU when using ComfyUI, is it worth the hassle to set it up and how would I do that?
2) Using the "default" SDXL setup/workflow -- is there a way to include other models/checkpoints/LoRAs I've downloaded from Civitai, and how would I do that?
3) Are there any "must have" ComfyUI custom nodes I need to add to be able to setup a decent workflow?
4) What are your personal "must have" negative prompts to avoid those horrible deformed bodies with missing/additional limbs? Are these prompts checkpoint specific or do you have some "default" prompts you always use no matter what checkpoints you use?
5) I've seen a lot of "grid" pictures from you guys, and I was wondering how to create those with ComfyUI, to then select a couple of pictures from that grid and upscale/improve only the selected ones? What's your workflow to accomplish this, or is this something that's not possible with ComfyUI and I just misunderstood how "grids" work?
6) I've read about being able to include all your ComfyUI settings in a saved PNG, but so far I couldn't figure out how to do it. Is there any guide on how to write that information into a PNG (or into a separate file that corresponds to a specific PNG) so I can go back to a specific PNG I've saved and just mess with it?
Sorry again if those are some very basic questions, so please feel free to ignore this post (or me in general), but any reply with useful links or tips and tricks is highly appreciated! Thanks a lot in advance!
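As an aside on question 6: ComfyUI embeds its node graph as JSON in PNG text chunks (reportedly under the "prompt" and "workflow" keys), which you can read back with Pillow. A minimal sketch, assuming Pillow is installed; the toy workflow dict and the "demo.png" file name are made up for illustration:

```python
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Embed a toy workflow dict in a PNG text chunk, the same mechanism
# ComfyUI uses to make its saved images re-loadable.
workflow = {"nodes": [{"id": 1, "type": "KSampler"}], "version": 0.4}
meta = PngInfo()
meta.add_text("workflow", json.dumps(workflow))

img = Image.new("RGB", (64, 64), "black")
img.save("demo.png", pnginfo=meta)

# Read it back: Pillow exposes PNG text chunks via the .text mapping.
with Image.open("demo.png") as im:
    recovered = json.loads(im.text["workflow"])

print(recovered["nodes"][0]["type"])  # KSampler
```

In ComfyUI itself you normally never touch this by hand: images saved through the Save Image node carry the graph, and dragging such a PNG onto the canvas restores it.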
I don't use ComfyUI myself but I do have it installed, so I can answer a couple.
1) You can try out Stability Swarm and see if you get it working.
I'm happy to see someone using my LoRA. It's a bit overtrained, so my tip is to not use more than 0.8; I find that the sweet spot is 0.3-0.6. I use about 0.4 most of the time.

After figuring out several things to make training fairly fast on low-spec systems, I thought I'd stick with the horrible idea of doing very slow and badly suited things. So I did some "quick" test goes at doing "animated" generations... and it's definitely "very slow", so far anyway; hoping I can figure out some way to improve on it.
This is just a "start to finish" trial run, so there's a bunch of stuff that should have been tweaked, but the main goal was more to get things set up and produce a working result. The low framerate is intentional; the hope was it would look slightly more "natural" as a facial movement, though it's debatable how that worked out. I should have specified a single color for the clothing; just one more thing to keep in mind for next time...
Hopefully things don't break too much in posting and in the compression.
View attachment 3128810
(Edit note: webp showed as working image in the editor when writing, but not when actually posted)
I was certain the grids can't be done in CUI even when I got up today, but merely a quick second ago I stumbled onto a CUI grid workaround while searching for inpaint workflows on Civitai.
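For anyone who wants grids without a custom node at all: ComfyUI has no built-in X/Y plot like A1111, but stitching saved outputs into a contact sheet is a few lines of Pillow. A minimal sketch, assuming Pillow is installed and all images share one size; the solid-colour tiles are placeholders for your generated files:

```python
from PIL import Image

def make_grid(images, cols):
    """Paste equally sized images into a cols-wide contact sheet."""
    w, h = images[0].size
    rows = (len(images) + cols - 1) // cols  # ceiling division
    grid = Image.new("RGB", (cols * w, rows * h))
    for i, im in enumerate(images):
        grid.paste(im, ((i % cols) * w, (i // cols) * h))
    return grid

# Four solid-colour tiles in a 2x2 layout, just to show the geometry.
tiles = [Image.new("RGB", (32, 32), c) for c in ("red", "green", "blue", "white")]
sheet = make_grid(tiles, cols=2)
print(sheet.size)  # (64, 64)
```

Since the grid is built after generation, you keep the individual PNGs (with their embedded workflows) and can still upscale or inpaint only the ones you pick from the sheet.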