[Stable Diffusion] Prompt Sharing and Learning Thread

Synalon

Member
Jan 31, 2022
202
626
I didn't notice much of a difference with the 3-CLIP SD3 text encoder setup that Comfy gave a heads-up on. Then again, I'm using the model with only 2 TEs, so...

Have you tried the FP16 version they posted?
Currently I'm just using sd3_medium_incl_clips_t5xxlfp8.safetensors. If you can link me to the FP16 version, I'll give it a try.
I just downloaded and installed the CLIP files and the FP16 version; I'll give it a try now.
 
Last edited:

felldude

Active Member
Aug 26, 2017
500
1,477
Currently I'm just using sd3_medium_incl_clips_t5xxlfp8.safetensors. If you can link me to the FP16 version, I'll give it a try.
I'm still fairly new to Comfy; even though I've had it for a while, I don't use it much, so if I need to adjust the workflow a lot for it, link me to a how-to guide as well, please.


This image has the SD3 CLIP workflow embedded; just delete the LoRA and change the checkpoint to base SD3.

ComfyUI_00013_.png
 
  • Like
Reactions: VanMortis

Synalon

Member
Jan 31, 2022
202
626


This image has the SD3 CLIP workflow embedded; just delete the LoRA and change the checkpoint to base SD3.

View attachment 3732948
TY, I think I have it sorted out now. The image is using FP16.


I changed the sampler and scheduler as an experiment; the image came out OK but could be better.

Do you know where to install samplers and schedulers in Comfy? I have way more in Forge that I can copy over to try.
 
Last edited:

felldude

Active Member
Aug 26, 2017
500
1,477
TY, I think I have it sorted out now. The image is using FP16.


I changed the sampler and scheduler as an experiment; the image came out OK but could be better.

Do you know where to install samplers and schedulers in Comfy? I have way more in Forge that I can copy over to try.
Comfy should have most of the new samplers; the turbo one is in a different list, and you can use the custom sampler node to build your own, set offsets, and do more complex things.

My understanding is that SD3 was only trained to work with Euler. They also claim to have removed, or maybe censored in some way, most of the adult content that was in the 5-billion-image set.

I am assuming they used AI tools to tag and prune the dataset down to a mere 1 billion images.

If I did the math right, with a batch size of 48 and a sustained 10 it/s, you could train on that dataset in about 578 hours.
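That arithmetic checks out; here is a quick sanity check using the figures quoted above (the single-pass-over-the-dataset assumption is mine):

```python
# Sanity check of the ~578-hour training estimate quoted above.
# Assumption (mine): "train that model" means one pass over the
# pruned 1-billion-image dataset.
dataset_size = 1_000_000_000   # images after pruning
batch_size = 48
iters_per_sec = 10             # sustained iteration rate

iterations = dataset_size / batch_size       # ~20.8M optimizer steps
hours = iterations / iters_per_sec / 3600
print(f"~{hours:.1f} hours")                 # ~578.7 hours
```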
 
Last edited:

Synalon

Member
Jan 31, 2022
202
626
The nude stuff certainly isn't working, but I have had some reasonable outputs considering the minimal prompts I've used while testing different schedulers and samplers.

This one is using DDIM as the sampler and ddim_uniform as the scheduler.

It's not good, but at least it's another direction to experiment with.

SD3_00080_.png
 

felldude

Active Member
Aug 26, 2017
500
1,477
With all the experimenting and work I was doing on SD3, I was able to fit a fine-tune in overnight; with everything optimized, I managed a full native fine-tune of SD 1.5.

(Yeah, not SD3. I'm not even sure I could fine-tune just the UNet in diffusers.)
But maybe with the new offloading/dynamic loading stuff Microsoft or IBM is developing.
If I had told anyone in 2022 that I did a fine-tune with 8-bit Adam, full gradient accumulation, and parallel distribution on an 8 GB card...
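As a rough sketch of why that can fit, here is a back-of-the-envelope VRAM budget. The parameter count and per-parameter byte sizes are my assumptions (fp16 weights and gradients, bitsandbytes-style 8-bit Adam with two 1-byte optimizer states per parameter), not figures from the post:

```python
# Back-of-the-envelope VRAM budget for a full SD 1.5 UNet fine-tune
# on an 8 GB card. All numbers are illustrative assumptions.
params = 860_000_000              # approx. SD 1.5 UNet parameter count

weights_gb = params * 2 / 1e9     # fp16 weights, 2 bytes/param
grads_gb   = params * 2 / 1e9     # fp16 gradients, 2 bytes/param
optim_gb   = params * 2 / 1e9     # 8-bit Adam: m and v at 1 byte each

static_gb = weights_gb + grads_gb + optim_gb
print(f"static tensors: ~{static_gb:.2f} GB")
# ~5.16 GB of static tensors, leaving the rest of the 8 GB for
# activations, which gradient accumulation keeps small by running
# a tiny micro-batch per step.
```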


My thoughts:

Epic Realism, Realistic Vision, Absolute Reality, Juggernaut: they are all using the same base training as a start, and it's not SD 1.5-EMA.
(Probably one of those models did use SD 1.5, but that might be lost to time.)

If you take the prompts from the following images from my fine-tune (they are not perfect) and run them in any of those checkpoints with the full list of negatives, you will see a pattern.

I thought the "AI girl" look was baked into SD 1.5; my opinion now is that it's just a symptom of fine-tuning a fine-tune, at best.

ComfyUI_00172_.png ComfyUI_00170_.png ComfyUI_00167_.png ComfyUI_00166_.png ComfyUI_00164_.png
 
Last edited:

Sharinel

Member
Dec 23, 2018
498
2,060
With all the experimenting and work I was doing on SD3, I was able to fit a fine-tune in overnight; with everything optimized, I managed a full native fine-tune of SD 1.5.

(Yeah, not SD3. I'm not even sure I could fine-tune just the UNet in diffusers.)
But maybe with the new offloading/dynamic loading stuff Microsoft or IBM is developing.
If I had told anyone in 2022 that I did a fine-tune with 8-bit Adam, full gradient accumulation, and parallel distribution on an 8 GB card...


My thoughts:

Epic Realism, Realistic Vision, Absolute Reality, Juggernaut: they are all using the same base training as a start, and it's not SD 1.5-EMA.
(Probably one of those models did use SD 1.5, but that might be lost to time.)

If you take the prompts from the following images from my fine-tune (they are not perfect) and run them in any of those checkpoints with the full list of negatives, you will see a pattern.

I thought the "AI girl" look was baked into SD 1.5; my opinion now is that it's just a symptom of fine-tuning a fine-tune, at best.

View attachment 3735222 View attachment 3735223 View attachment 3735224 View attachment 3735225 View attachment 3735226
Every single word of that was in English, and I understood none of it :)

However, there were boobas at the end, so good job!
 

felldude

Active Member
Aug 26, 2017
500
1,477
Every single word of that was in English, and I understood none of it :)

However there were boobas at the end so good job!
Lol, I am saying that it appears the most popular checkpoints are copy-pastes of one good training run, with very little difference between them.
But the loss on some of those models is pretty high, hence the 2 GB file size.