
[Stable Diffusion] Prompt Sharing and Learning Thread

Halmes

Newbie
May 22, 2017
18
6
Based on the error message, you're trying to use CPU training and need to

The dropdown you're showing is the non-EMA base checkpoint, so in that case I would just use whatever model is closest to the style you're going for.

With an image count that high you might want to lower the learning rate to 1e-6 for the U-Net and 5e-7 for the text encoder.
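For reference, those rates map directly onto kohya-ss sd-scripts flags; a minimal sketch, assuming train_network.py with the standard LoRA network module (the model path and dataset layout are placeholders):

Code:
accelerate launch train_network.py \
  --pretrained_model_name_or_path="base_model.safetensors" \
  --train_data_dir="./dataset" \
  --network_module=networks.lora \
  --unet_lr=1e-6 \
  --text_encoder_lr=5e-7 \
  --output_name=my_lora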
Colab says "GPU connection cannot be established due to Colab usage", so I have to pay to use the GPU more.
I changed Google accounts, but now it says "An NVIDIA GPU may be present on this machine, but a CUDA-enabled jaxlib is not installed. Falling back to cpu."
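If the goal is to stay on GPU, that warning usually means the CPU-only jaxlib got installed; a sketch of the usual fix, assuming a CUDA 12 runtime (the exact extra depends on the JAX version):

Code:
# reinstall JAX with CUDA support
pip install -U "jax[cuda12]"
# then verify a GPU device is visible
python -c "import jax; print(jax.devices())"   # expect CudaDevice, not CpuDevice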
 

DrPepper808

Newbie
Dec 7, 2021
85
47
I just started playing with this; it's been a lot of fun. I have three questions.
1) Let's say I have a prompt in txt2img and I create 25 images, and one is perfect but the other 24 not so much. Is there a way to take that one image and train SD to do a better job with the same prompt?
2) I used a LoRA to lock in the face, but I also need to lock in the clothes so I can try my hand at making a game. Is there a way to lock those in?
3) If I want an image without a background for easy import into Ren'Py, is there a prompt for that?
 

devilkkw

Member
Mar 17, 2021
327
1,113
I just started playing with this; it's been a lot of fun. I have three questions.
1) Let's say I have a prompt in txt2img and I create 25 images, and one is perfect but the other 24 not so much. Is there a way to take that one image and train SD to do a better job with the same prompt?
2) I used a LoRA to lock in the face, but I also need to lock in the clothes so I can try my hand at making a game. Is there a way to lock those in?
3) If I want an image without a background for easy import into Ren'Py, is there a prompt for that?
What UI are you using?
 

devilkkw

Member
Mar 17, 2021
327
1,113
I used A1111 for a long time, but I've switched to CUI.
So, some possible answers:
1) You can train a LoRA on that result image, but the result isn't guaranteed. LoRA training is not so simple; understanding every training value and setting is hard at the beginning, but you can try and see. Experimenting is the way. For training a LoRA I suggest using
2) The same way as for the face: you can use a LoRA for the clothes.
3) Not a prompt; there are some extensions that do it, but I've only checked them on CUI, so I can't point to the right extension for A1111 because I've never tried it there.

A1111 is good and I had fun with it for a long time, but CUI is more powerful for me; the node UI permits workflows that would be very difficult to recreate in A1111. But this is my opinion; there are many skilled people, and everyone works well in whichever UI they use.
 

felldude

Active Member
Aug 26, 2017
572
1,701
I trained a LoRA with AdamW 8-bit on Pony using Civitai. My 2048 trainings on base Pony kept failing, so I trained on my custom model instead; unfortunately that is limited to 1024x1024, a size I can do locally, but I'd rather not tie up my machine for 12 hours vs. 10 minutes for SD 1.5.

Without chubby LoRA

ComfyUI_01009_.png

With chubby LoRA


ComfyUI_01002_.png
 

DrPepper808

Newbie
Dec 7, 2021
85
47
I used A1111 for a long time, but I've switched to CUI.
So, some possible answers:
1) You can train a LoRA on that result image, but the result isn't guaranteed. LoRA training is not so simple; understanding every training value and setting is hard at the beginning, but you can try and see. Experimenting is the way. For training a LoRA I suggest using
2) The same way as for the face: you can use a LoRA for the clothes.
3) Not a prompt; there are some extensions that do it, but I've only checked them on CUI, so I can't point to the right extension for A1111 because I've never tried it there.

A1111 is good and I had fun with it for a long time, but CUI is more powerful for me; the node UI permits workflows that would be very difficult to recreate in A1111. But this is my opinion; there are many skilled people, and everyone works well in whichever UI they use.
When you say CUI, do you mean ComfyUI? I just started looking into ComfyUI; it seems like what I was looking for.
 

devilkkw

Member
Mar 17, 2021
327
1,113
Yes, CUI means ComfyUI.
I'm not at my PC right now, but if you start using it I suggest adding ComfyUI Manager. You can find the download link by searching for it on Google.
If you like it and want to try it, I will share my workflow for removing backgrounds when I'm at my PC.
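For reference, the usual install is a git clone into ComfyUI's custom_nodes folder; a quick sketch, assuming the standard ltdrdata repo and a default ComfyUI layout:

Code:
cd ComfyUI/custom_nodes
git clone https://github.com/ltdrdata/ComfyUI-Manager
# restart ComfyUI afterwards; a Manager button should appear in the menu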
 

DrPepper808

Newbie
Dec 7, 2021
85
47
Yes, CUI means ComfyUI.
I'm not at my PC right now, but if you start using it I suggest adding ComfyUI Manager. You can find the download link by searching for it on Google.
If you like it and want to try it, I will share my workflow for removing backgrounds when I'm at my PC.
I watched 3 or 4 vids on it last night; it's amazing, and a little overwhelming. LOL
 

felldude

Active Member
Aug 26, 2017
572
1,701
I watched 3 or 4 vids on it last night; it's amazing, and a little overwhelming. LOL
It can be overwhelming when watching people with master-crafted workflows, but the default workflow with one or two LoRAs is simple to learn.

I only recently got xFormers working for Auto1111; without it, my 1024x1024 XL generations took 5 minutes vs. 20 seconds.
That was the only reason I switched to Comfy, but now I only use Auto for text-to-3D or some of the other features.

Auto1111 will likely have longer support and more modules built for it, since Hugging Face made Gradio the building block for every web UI they use, but I would find it hard to go back to Auto1111 now.
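For anyone else chasing that speedup, enabling it is normally just a launch flag; a sketch of the usual webui-user.bat edit, assuming a recent Auto1111 build whose bundled xFormers wheel matches your torch/CUDA versions:

Code:
rem webui-user.bat
set COMMANDLINE_ARGS=--xformers
rem launch as usual; cross-attention now runs through xFormers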
 

devilkkw

Member
Mar 17, 2021
327
1,113
I watched 3 or 4 vids on it last night; it's amazing, and a little overwhelming. LOL
This is a simple workflow I use to remove backgrounds:
kkw-RemBg.png

And this is the result (the workflow is also embedded in the image):
kkw-alphaTest-_00001_.png

Just drag your image into the image loader and queue the prompt.
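If anyone wants the same result outside Comfy, a minimal Python sketch with the rembg package does roughly what this workflow does (assuming pip install rembg; the actual node pack used here may differ):

Code:
# pip install rembg
from rembg import remove
from PIL import Image

img = Image.open("input.png")
out = remove(img)        # RGBA image with the background cut out
out.save("output.png")   # PNG keeps the alpha channel, handy for Ren'Py imports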



It can be overwhelming when watching people with master-crafted workflows, but the default workflow with one or two LoRAs is simple to learn.

I only recently got xFormers working for Auto1111; without it, my 1024x1024 XL generations took 5 minutes vs. 20 seconds.
That was the only reason I switched to Comfy, but now I only use Auto for text-to-3D or some of the other features.

Auto1111 will likely have longer support and more modules built for it, since Hugging Face made Gradio the building block for every web UI they use, but I would find it hard to go back to Auto1111 now.
From what I've seen, Forge seems to have better memory management than A1111. I totally switched to CUI because of what you can do with it, like experimenting with multiple samplers in one image, applying LoRAs at different steps, etc.
But I keep my favorite A1111 version for testing new LoRAs or models I make; if I share one, I want to check it in both UIs.
 

felldude

Active Member
Aug 26, 2017
572
1,701
From what I've seen, Forge seems to have better memory management than A1111. I totally switched to CUI because of what you can do with it, like experimenting with multiple samplers in one image, applying LoRAs at different steps, etc.
But I keep my favorite A1111 version for testing new LoRAs or models I make; if I share one, I want to check it in both UIs.
I have not used Forge; keeping Auto1111, Comfy, and Kohya all in separate venvs is taking up enough space, with 98% the same files and the 2% difference that makes them incompatible.

You have #coder in your signature; have you gotten DeepSpeed to work on Windows?
The precompiled one fails for me, and when I compiled it with Ninja it corrupted my CUDA files. I tried multiple CUDA SDKs and fixed the reference to the Linux time.h.
 

Sharinel

Active Member
Dec 23, 2018
608
2,555
I have not used Forge; keeping Auto1111, Comfy, and Kohya all in separate venvs is taking up enough space, with 98% the same files and the 2% difference that makes them incompatible.
I'm using StableSwarmUI at the moment as it has SD3 compatibility; I might swap over from Forge. It's an Auto1111-type UI built on top of Comfy.
 

felldude

Active Member
Aug 26, 2017
572
1,701
I'm using StableSwarmUI at the moment as it has SD3 compatibility; I might swap over from Forge. It's an Auto1111-type UI built on top of Comfy.
I'm using a custom build of Comfy that I built with TensorFlow RT .dlls; other than a round-off error, I have no issues.

Screenshot 2024-06-12 125008.jpg

I'm not sure it is actually speeding things up, because no one posts their it/s or secs per it... lol

For 1024x1024 I am at 1.0-1.5 it/s with most samplers (not Heun).
For 2048x2048 I am at 4.5 secs per it.

This is a native render, not hires fix or SEGS, which break the image down.
Oh, and my motherboard is only PCIe 3.0, so I am at half bandwidth, but I'm not sure it matters.
2,560 GPU cores on an RTX 3050.
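Converting those rates to wall time per image is straightforward; a quick sketch, assuming 20 steps and ignoring VAE and model-load overhead:

Code:
steps = 20
print(steps / 1.25)  # 1024x1024 at ~1.25 it/s -> ~16 s per image
print(steps * 4.5)   # 2048x2048 at 4.5 s/it  -> 90 s per image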
 

Sharinel

Active Member
Dec 23, 2018
608
2,555
I'm using a custom build of Comfy that I built with TensorFlow RT .dlls; other than a round-off error, I have no issues.

View attachment 3729886

I'm not sure it is actually speeding things up, because no one posts their it/s or secs per it... lol

For 1024x1024 I am at 1.0-1.5 it/s with most samplers (not Heun).
For 2048x2048 I am at 4.5 secs per it.

This is a native render, not hires fix or SEGS, which break the image down.
Oh, and my motherboard is only PCIe 3.0, so I am at half bandwidth, but I'm not sure it matters.
2,560 GPU cores on an RTX 3050.
It doesn't tell me any of that on StableSwarm, I'm afraid; this is for 1024x1024:
1718212529999.png
That's about the closest.
And on the StableSwarm page it shows on the right how long it took:
1718212610379.png
 

DrPepper808

Newbie
Dec 7, 2021
85
47
This is a simple workflow I use to remove backgrounds:
View attachment 3729751

And this is the result (the workflow is also embedded in the image):
View attachment 3729750

Just drag your image into the image loader and queue the prompt.





From what I've seen, Forge seems to have better memory management than A1111. I totally switched to CUI because of what you can do with it, like experimenting with multiple samplers in one image, applying LoRAs at different steps, etc.
But I keep my favorite A1111 version for testing new LoRAs or models I make; if I share one, I want to check it in both UIs.
The good news is I have decades of workflow experience, so the CUI GUI is very similar to other tools I have used. The 200k different options are the hard part :p
 

felldude

Active Member
Aug 26, 2017
572
1,701
It doesn't tell me any of that on StableSwarm, I'm afraid; this is for 1024x1024:
View attachment 3729914
That's about the closest.
And on the StableSwarm page it shows on the right how long it took:
View attachment 3729916
For me it's:
25-30 seconds on average per image at 1024x1024 (assuming 20 steps), including the VAE encode time.
Around 2 minutes to native-render a 2048x2048.

It looks like you're using a batch size of 4 (I only use a batch size of 4 or 8 on SD, as it's out of range of my 8GB card for XL).

And if I did the math right, you're at 115 seconds per image.
Could that be the batch size slowing down generation?
 

Sharinel

Active Member
Dec 23, 2018
608
2,555
For me it's:
25-30 seconds on average per image at 1024x1024 (assuming 20 steps), including the VAE encode time.
Around 2 minutes to native-render a 2048x2048.

It looks like you're using a batch size of 4 (I only use a batch size of 4 or 8 on SD, as it's out of range of my 8GB card for XL).

And if I did the math right, you're at 115 seconds per image.
Could that be the batch size slowing down generation?
Your maths might be a bit off; it's showing 17 secs or so for each generation. It starts at 18:10:06 and the last one kicks off at 18:10:58. I have a 4090, so it's certainly a lot faster than 115 secs.
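A quick check of that figure, assuming the four generations start at 18:10:06 and the last kicks off at 18:10:58 (four start times means three gaps between them):

Code:
from datetime import datetime

start = datetime.strptime("18:10:06", "%H:%M:%S")
last = datetime.strptime("18:10:58", "%H:%M:%S")
gaps = 3  # 4 generations -> 3 intervals between start times
print((last - start).total_seconds() / gaps)  # ~17.3 s per generation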