[Stable Diffusion] Prompt Sharing and Learning Thread

devilkkw

Member
Mar 17, 2021
327
1,113
Is there a better way to use LoRAs in ComfyUI? I usually place a LoRA loader right next to the base model and link it to every sampler I use. Has anyone tested other architectures, like placing it only on a second sampler that finishes the image?
If you download this:
A fully automated workflow for "image 2 text 2 image" or "image 2 image"
View attachment 3610374

This workflow loads an image; if there's a prompt, it uses it to make a new image, and if not, it uses a BLIP model to generate one.
I've made it as simple as possible; you only have to select a few switches for generation.
It can also load a LoRA and swap parts of the prompt.

image 2 text 2 image sample:
View attachment 3610395
changing subject View attachment 3610388

Image 2 Image sample (subject change):
foxy
View attachment 3610392
dog View attachment 3610393
cat View attachment 3610394
tom cruise
View attachment 3610391
emma watson
View attachment 3610390

All images have the workflow included.
I hope you like it and experiment with it.
You'll see there's a "CLIP Set Last Layer" node connected after the LoRA; I set it to -24 and the LoRA seems to work better.
Use it to see how to connect things and run some tests, like fixing the seed and changing values. Most important are the LoRA values: weight and CLIP strength make a real difference.

Also, with multiple samplers, have you tried doing a VAE re-encode and then reapplying the LoRA in the next sampler? There are so many tricks ComfyUI lets you test. Maybe sharing a simple workflow and letting us modify it would be a good idea, because everyone has their own skills, and with these tools there's something new to learn every day.
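For anyone wondering what the "CLIP Set Last Layer" node actually does: it stops the text encoder early, so the conditioning comes from an intermediate layer's hidden states instead of the final layer's. A toy sketch of the indexing in plain Python (the 24-layer encoder and the layer labels are made up for illustration):

```python
def clip_set_last_layer(hidden_states, stop_at):
    """Pick which encoder layer's output feeds the conditioning.

    hidden_states: per-layer outputs, index 0 = first layer.
    stop_at: negative index from the end, ComfyUI-style
             (-1 = final layer, -2 = one layer earlier, and so on).
    """
    return hidden_states[stop_at]

# Toy "layer outputs" for a hypothetical 24-layer text encoder.
layers = [f"layer_{i}_output" for i in range(24)]

print(clip_set_last_layer(layers, -1))   # default: layer_23_output
print(clip_set_last_layer(layers, -24))  # very early stop: layer_0_output
```

Large negative values like -24 pull the conditioning from very early layers, which can noticeably change how the prompt and LoRA interact; it's worth sweeping the value with a fixed seed, as suggested.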
 

hkennereth

Member
Mar 3, 2019
239
784
Is there a better way to use LoRAs in ComfyUI? I usually place a LoRA loader right next to the base model and link it to every sampler I use. Has anyone tested other architectures, like placing it only on a second sampler that finishes the image?
That's my usual process. In my workflow I run the prompt first without any LoRAs on an SDXL checkpoint, which gives me more original compositions and poses, and then, using a combination of ControlNet and img2img, I run a second SD 1.5 checkpoint with a character LoRA to get the likeness of the person I'm making images of. Works great.
 

Nano999

Member
Jun 4, 2022
173
73
How do you switch from 1.5 to SDXL or Pony? Do you just download the model, load it as the base model, and that's it?
And if you want to use 1.5, you just load any other model, right?
Because when I try to use SDXL models they just generate ugly nonsense:
1715777733677.png
 
Last edited:

Sharinel

Active Member
Dec 23, 2018
611
2,566
How do you switch from 1.5 to SDXL or Pony? Do you just download the model, load it as the base model, and that's it?
And if you want to use 1.5, you just load any other model, right?
Because when I try to use SDXL models they just generate ugly nonsense:
View attachment 3636655
Check your VAE; that is normally the cause of SDXL nonsense generations. If you are using Auto1111/Forge, set it to automatic. Not sure what you do in Comfy, but I'm sure you just need to add another 20 flowcharts and some spaghetti linking them and you'll be fine.
VAE.jpg
 

Nano999

Member
Jun 4, 2022
173
73
flowcharts?
spaghetti?

I'm trying to use this LoRA, but nothing works:


It never generates good quality, and it's super slow, like 10 minutes per image.

1715783186097.png
1715783191537.png

1715782994679.png
1715783023805.png
1715785589592.png
Original:
1715785465038.png

My result:
1715785730167.png
1715785748877.png
1715785756279.png
 

EvylEve

Newbie
Apr 5, 2021
31
57
flowcharts?
spaghetti?

I'm trying to use this LoRA, but nothing works:


It never generates good quality, and it's super slow, like 10 minutes per image.

View attachment 3637020
View attachment 3637022

View attachment 3637010
View attachment 3637013
View attachment 3637116
Original:
View attachment 3637112

My result:
View attachment 3637122
View attachment 3637124
View attachment 3637125
Hmm, weird. Have you tried increasing the dimensions of your target image? The default on A1111 is 512x512, which is perfect for SD 1.5, but SDXL requires something larger.

One more tip: don't bother with the refiner initially; hires fix is way better (tweak its settings to enable model swap).

Remove Pony-specific tokens (if you're not using a Pony-based model).

However, with your exact same prompt, using the base SDXL model (but the one with the VAE fix, you can get it ) and no refiner, just increasing the size to 768x768, I get something like:

grid-0088.png

Increasing to 1024x1024:


grid-0089.jpg


And if I switch to a pony model (ponyRealism v11):



grid-0090.png
 
Last edited:

spikor

New Member
Dec 11, 2021
13
102
I'm a noob with SD, but I had the same issue: switch to 1024x1024 images and it will work. SDXL is trained at that resolution and apparently breaks at 512x512.
On the other hand, SD 1.5 works much better at 512x512 for the same reason.
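That resolution advice can be written down as a tiny helper: aim for roughly one megapixel total and keep both sides a multiple of 64 (which the UNet/VAE downsampling expects). The function below is an illustrative sketch, not part of any UI:

```python
import math

def sdxl_size(aspect_w, aspect_h, target_pixels=1024 * 1024, multiple=64):
    """Pick a width/height near ~1 megapixel for a given aspect ratio,
    with both sides rounded to multiples of 64."""
    ratio = aspect_w / aspect_h
    height = math.sqrt(target_pixels / ratio)
    width = height * ratio

    def snap(v):
        return max(multiple, round(v / multiple) * multiple)

    return snap(width), snap(height)

print(sdxl_size(1, 1))    # (1024, 1024)
print(sdxl_size(16, 9))   # (1344, 768)
print(sdxl_size(9, 16))   # (768, 1344)
```

For SD 1.5 checkpoints, swapping `target_pixels` for `512 * 512` gives the analogous smaller sizes.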
 

Sharinel

Active Member
Dec 23, 2018
611
2,566
flowcharts?
spaghetti?

I'm trying to use this LoRA, but nothing works:


It never generates good quality, and it's super slow, like 10 minutes per image.

View attachment 3637020
View attachment 3637022

View attachment 3637010
View attachment 3637013
View attachment 3637116
Original:
View attachment 3637112

My result:
View attachment 3637122
View attachment 3637124
View attachment 3637125
Right, let's go through this:

1. You are running a Pony prompt and trying to use a Pony LoRA, but on base SDXL. That doesn't work. It's like trying to put diesel into a petrol vehicle: both do the same thing, slightly differently, and they don't play nice with each other. (How do I know it's a Pony prompt? All that score_9 stuff at the start of the prompt.)

2. Go back to civitai, download the following checkpoint -
This is the base Pony model.

3. As EvylEve said above, swap to 1024x1024 and untick the 'enable refiner' box. For the SD VAE section, set it to automatic (most new checkpoints have the VAE included, so the automatic setting uses theirs by default). Don't bother with hires.fix just yet.

4. If it still doesn't work, take screenshots again and we'll (try to) see what's going on.
 
Last edited:
  • Like
Reactions: EvylEve

felldude

Active Member
Aug 26, 2017
572
1,701
Two for one Special today.

(Trained with Pony Images for workflow to PONYXL)

ComfyUI_01139_.png ComfyUI_01079_.png



ComfyUI_00632_.png ComfyUI_00597_.png
 
  • Like
Reactions: VanMortis

felldude

Active Member
Aug 26, 2017
572
1,701
This is my 10th 2k refinement of Pony.
(Not native, I wish; I can native-tune 1.5 models, but it takes days.)

As the result of multiple 2k trainings, I now have a model that can produce this image by going 1280x1280 text-to-image, then 1280x1280 to 1600x1600 image-to-image at 0.65 denoise, then 1600x1600 to 2048x2048 at 0.5.

The image is 100% AI and not some real image run at 0.01.

ComfyUI_00700_.png

It also does well with concepts it was not directly trained on.

ComfyUI_00715_.png
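The three passes described above form a simple schedule: resolution goes up while denoise goes down, so later passes sharpen detail without repainting the composition. A sketch using the post's values (the checking function is purely illustrative):

```python
# (size, denoise) per pass; denoise None = plain text-to-image.
STAGES = [
    (1280, None),   # txt2img at 1280x1280
    (1600, 0.65),   # img2img to 1600x1600
    (2048, 0.50),   # img2img to 2048x2048
]

def total_upscale(stages):
    """Validate the schedule and return the overall upscale factor."""
    sizes = [size for size, _ in stages]
    denoises = [d for _, d in stages if d is not None]
    assert sizes == sorted(sizes), "resolution should only increase"
    assert denoises == sorted(denoises, reverse=True), "denoise should only decrease"
    return sizes[-1] / sizes[0]

print(total_upscale(STAGES))  # 1.6
```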
 
Last edited:

XenoXe

New Member
Sep 8, 2021
10
4
This is my 10th 2k refinement of Pony.
(Not native, I wish; I can native-tune 1.5 models, but it takes days.)

As the result of multiple 2k trainings, I now have a model that can produce this image by going 1280x1280 text-to-image, then 1280x1280 to 1600x1600 image-to-image at 0.65 denoise, then 1600x1600 to 2048x2048 at 0.5.

The image is 100% AI and not some real image run at 0.01.

View attachment 3659095

It also does well with concepts it was not directly trained on.

View attachment 3659173
Bro, the quality is insane. How can I learn to make quality LoRAs like these? I've been learning LoRAs and stuff for like a year now and I'm pretty decent at it, but I've never come across quality this good. Bruh, teach ME!
 
  • Like
Reactions: felldude

Ludu

New Member
Jun 7, 2021
13
3
A quick question: what would I do to get the same character in the same pose but wearing different outfits? I tried to do it through img2img inpainting, but it didn't seem to work.
 

felldude

Active Member
Aug 26, 2017
572
1,701
Bro, the quality is insane. How can I learn to make quality LoRAs like these? I've been learning LoRAs and stuff for like a year now and I'm pretty decent at it, but I've never come across quality this good. Bruh, teach ME!
The dataset is key. Most of my SD LoRAs use 100 or so images now. One or two "bad" images may not reinforce bad learning in a set that size, but 10% will.

For example, I took 100 random images from the old HQ-Faces dataset and substituted 10 of them with the face of the Victoria 3D model. Rather than the 90 outweighing the 10, it biased the LoRA, presumably because the 10 were all similar faces at different angles and expressions. (The captions were marked "3D" for those 10 and "realistic" for the 90.)

My understanding is that to native fine-tune with Adam on even a 1.5 checkpoint you need 32GB of VRAM.
I can native fine-tune with AdaFactor or Lion on 1.5, but I have never had a refinement that looked good.
Juggernaut, RealisticVision, and EpicRealism are all clearly based off of the same refinement. (I don't know who did the first.)

With XL I am using LoRA merges, as only the big boys with their 8 A1000s can native fine-tune.
(With what Adam needs, you could probably do Lion or AdaFactor on a 32GB card.)

Even with the LoRAs, I have around 16GB of training data that I play around with, adjusting the weighting and rank until I get a model that doesn't have catastrophic loss (Pony already has major loss).
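To make the partial-weight merging concrete: folding a LoRA into a base weight matrix is just W + s * (alpha / rank) * (B @ A), with the merge strength s around 0.1-0.2 as described above. A minimal numpy sketch (shapes and scaling follow the common LoRA convention; the function name is mine, not from any tool):

```python
import numpy as np

def merge_lora(W, A, B, alpha, merge_weight=0.15):
    """Fold a LoRA delta into base weights at partial strength:
    W' = W + merge_weight * (alpha / rank) * (B @ A)."""
    rank = A.shape[0]
    return W + merge_weight * (alpha / rank) * (B @ A)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))   # base weight matrix
A = rng.normal(size=(4, 8))   # rank-4 down projection
B = np.zeros((8, 4))          # zero up projection = no-op LoRA

# A zero-initialized B leaves the base weights untouched.
assert np.allclose(merge_lora(W, A, B, alpha=4.0), W)
```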
 

XenoXe

New Member
Sep 8, 2021
10
4
The dataset is key. Most of my SD LoRAs use 100 or so images now. One or two "bad" images may not reinforce bad learning in a set that size, but 10% will.

For example, I took 100 random images from the old HQ-Faces dataset and substituted 10 of them with the face of the Victoria 3D model. Rather than the 90 outweighing the 10, it biased the LoRA, presumably because the 10 were all similar faces at different angles and expressions. (The captions were marked "3D" for those 10 and "realistic" for the 90.)

My understanding is that to native fine-tune with Adam on even a 1.5 checkpoint you need 32GB of VRAM.
I can native fine-tune with AdaFactor or Lion on 1.5, but I have never had a refinement that looked good.
Juggernaut, RealisticVision, and EpicRealism are all clearly based off of the same refinement. (I don't know who did the first.)

With XL I am using LoRA merges, as only the big boys with their 8 A1000s can native fine-tune.
(With what Adam needs, you could probably do Lion or AdaFactor on a 32GB card.)

Even with the LoRAs, I have around 16GB of training data that I play around with, adjusting the weighting and rank until I get a model that doesn't have catastrophic loss (Pony already has major loss).
I appreciate you helping me out here, and I have a few questions in mind. I don't fully understand the process of native fine-tuning; all I've ever done is train a LoRA from the ground up using custom data with the base model. The process of LoRA merging also intrigues me, because I've never seen a useful guide or understood how it works. But I know that in cases like yours it produces a top-tier LoRA. I've tested it and found there is no style bleeding or artifacting in the images, it works well with most Pony-based checkpoints, and on top of that the quality is insane. I want to learn this GODLY technique from you.

Also, I've been focusing mainly on the Pony XL model for now, and I know it has some issues with training, unlike SD 1.5. From my understanding of fine-tuning, there are only two options in the Kohya trainer: Dreambooth fine-tuning and fine-tuning using LoRA. But I've never tested those. I have a plan to train an anime series with a maximum of six characters, with all the styles and such, but training it on just a LoRA will definitely overfit in one way or another. So I wanted to hear some advice from you on a better solution.
 
  • Like
Reactions: felldude

felldude

Active Member
Aug 26, 2017
572
1,701
I appreciate you helping me out here, and I have a few questions in mind. I don't fully understand the process of native fine-tuning; all I've ever done is train a LoRA from the ground up using custom data with the base model. The process of LoRA merging also intrigues me, because I've never seen a useful guide or understood how it works. But I know that in cases like yours it produces a top-tier LoRA. I've tested it and found there is no style bleeding or artifacting in the images, it works well with most Pony-based checkpoints, and on top of that the quality is insane. I want to learn this GODLY technique from you.

Also, I've been focusing mainly on the Pony XL model for now, and I know it has some issues with training, unlike SD 1.5. From my understanding of fine-tuning, there are only two options in the Kohya trainer: Dreambooth fine-tuning and fine-tuning using LoRA. But I've never tested those. I have a plan to train an anime series with a maximum of six characters, with all the styles and such, but training it on just a LoRA will definitely overfit in one way or another. So I wanted to hear some advice from you on a better solution.
Native fine-tuning would be the closest thing to training a checkpoint from the ground up.
Dreambooth training is close to native fine-tuning, only you're creating a LoRA rather than training the entire checkpoint.

AdaFactor, Lion, and Prodigy are considered inferior to Adam; however, the resources needed are also lower.
You can native fine-tune an SD 1.5 model with Lion on only 8GB of VRAM; Adam quadruples that.

When fine-tuning you also use the EMA weights rather than the inference (non-EMA) checkpoint.
(The SD 1.5 EMA is almost 8GB.)

So an XL checkpoint ends up around 13GB, something most people can't even load, let alone train with.
(Non-EMA Juggernaut X is only 7.1GB.)

For XL models both of those options are out of reach for most people, so I'll focus on the basic LoRA.

Civitai can train a basic LoRA for Pony up to 10,000 steps (50,000 to 80,000 repeats depending on the batch size limit).
For Pony it has a batch size of 5, so that allows LoRAs with an image count of 400-800 to train to convergence.
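A quick sanity check on those numbers: 10,000 steps at batch size 5 is 50,000 image presentations, so how many epochs that works out to depends on the dataset size:

```python
def epochs_seen(steps, batch_size, num_images):
    """How many times each training image is seen on average."""
    return steps * batch_size / num_images

# Civitai's cap (10,000 steps, batch size 5, per the post):
print(epochs_seen(10_000, 5, 400))  # 125.0 epochs
print(epochs_seen(10_000, 5, 800))  # 62.5 epochs
```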

My rig does 30 secs per iteration at a batch size of 5 on 1024x1024, and it will not train at 2048x2048 at all.
(Thus all but two of my XL trainings have been done by Civitai.)

You can train a high-rank LoRA and then adjust it using Kohya. I never merge a LoRA at full weight; it is usually 0.1-0.2.
You can rank a LoRA down if it is overfitting. (You can rank up, but it is not advised.)

So in short, with Civitai allowing 2k training and around the number of steps needed to refine a checkpoint (100k or so), you can merge a high-quality LoRA into a checkpoint.

Extensive testing should be done beforehand to make sure you're not causing catastrophic loss to the model. I would never merge my tanlines model, as I could not find enough images to make a model that doesn't create a beach scene when you ask for a cityscape.
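Ranking a LoRA down, as mentioned above, amounts to re-factoring the low-rank delta B @ A at a smaller rank, and truncated SVD gives the best such approximation. A numpy sketch of the idea (the function is illustrative; Kohya's LoRA resize tooling is built on the same principle):

```python
import numpy as np

def rank_down(A, B, new_rank):
    """Re-factor a LoRA delta (B @ A) at a lower rank via truncated SVD.

    A: (rank, in_features), B: (out_features, rank).
    Returns A', B' whose product is the best rank-new_rank approximation."""
    delta = B @ A
    U, S, Vt = np.linalg.svd(delta, full_matrices=False)
    A_new = np.diag(S[:new_rank]) @ Vt[:new_rank]   # (new_rank, in_features)
    B_new = U[:, :new_rank]                         # (out_features, new_rank)
    return A_new, B_new

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 16))    # rank-8 LoRA factors
B = rng.normal(size=(16, 8))
A2, B2 = rank_down(A, B, 4)
print(A2.shape, B2.shape)       # (4, 16) (16, 4)
```

Truncating at or above the delta's true rank reproduces it exactly; truncating below trades fidelity for a smaller, less overfit adapter.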
 
Last edited:
  • Like
Reactions: devilkkw and XenoXe

XenoXe

New Member
Sep 8, 2021
10
4
Native fine-tuning would be the closest thing to training a checkpoint from the ground up.
Dreambooth training is close to native fine-tuning, only you're creating a LoRA rather than training the entire checkpoint.

AdaFactor, Lion, and Prodigy are considered inferior to Adam; however, the resources needed are also lower.
You can native fine-tune an SD 1.5 model with Lion on only 8GB of VRAM; Adam quadruples that.

When fine-tuning you also use the EMA weights rather than the inference (non-EMA) checkpoint.
(The SD 1.5 EMA is almost 8GB.)

So an XL checkpoint ends up around 13GB, something most people can't even load, let alone train with.
(Non-EMA Juggernaut X is only 7.1GB.)

For XL models both of those options are out of reach for most people, so I'll focus on the basic LoRA.

Civitai can train a basic LoRA for Pony up to 10,000 steps (50,000 to 80,000 repeats depending on the batch size limit).
For Pony it has a batch size of 5, so that allows LoRAs with an image count of 400-800 to train to convergence.

My rig does 30 secs per iteration at a batch size of 5 on 1024x1024, and it will not train at 2048x2048 at all.
(Thus all but two of my XL trainings have been done by Civitai.)

You can train a high-rank LoRA and then adjust it using Kohya. I never merge a LoRA at full weight; it is usually 0.1-0.2.
You can rank a LoRA down if it is overfitting. (You can rank up, but it is not advised.)

So in short, with Civitai allowing 2k training and around the number of steps needed to refine a checkpoint (100k or so), you can merge a high-quality LoRA into a checkpoint.

Extensive testing should be done beforehand to make sure you're not causing catastrophic loss to the model. I would never merge my tanlines model, as I could not find enough images to make a model that doesn't create a beach scene when you ask for a cityscape.
Speaking of fine-tuning, how do you actually run the training to get that native fine-tune? With kohya_ss, OneTrainer, etc.?