[Stable Diffusion] Prompt Sharing and Learning Thread

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Since XL is meant to be "better" at some things, I wondered what it would be like to create images with an XL model and then train a lora for a 1.5 model using those. If nothing else, it would be an easy way to get "new" faces, since the, let's call them default, faces in XL are different from the ones we're very familiar with from 1.5.
So I generated a bunch of 1024x1024 images (even that is pushing my ability to train on them), dropped the ones with obvious problems/faults and started training... and then noticed the expected time (stopwatch replaced with a calendar) :p
Anyway, I ran it for 8 epochs; the sample images suggested the output was pretty consistent, so I stopped training, tested the loras, and they showed a similarly consistent "look".
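For anyone wanting to try the same thing, the dataset-generation step looks roughly like this with the diffusers library. This is only a sketch: the model ID, prompt and folder name are placeholders for illustration, not exactly what I used.

```python
import os
import torch
from diffusers import StableDiffusionXLPipeline

# Load an SDXL checkpoint in fp16 to keep VRAM use manageable.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "photo portrait of a woman, detailed skin, natural lighting"
os.makedirs("dataset", exist_ok=True)

for i in range(40):
    # One fixed seed per image keeps the set reproducible and easy to prune.
    generator = torch.Generator("cuda").manual_seed(1000 + i)
    image = pipe(prompt, width=1024, height=1024, generator=generator).images[0]
    image.save(f"dataset/face_{i:03d}.png")
```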

These are 4 images generated with the trained model:
View attachment 2878282

I did a test across some other models to see how much of a difference there would be:
View attachment 2878283

Is there any interest in the lora?
I've got no idea what kind of issues it has with regard to flexibility etc. I just did a very basic training setup, and since it seemed to work OK there was no point in doing anything more.

(Updated to add link etc)
View attachment 2878762
Adding an image for advertisement?

Please no sale/profit usage, no claiming credit etc., just the usual respecting of other people's work :)
Thank you very much, very generous.
 

shkemba

Newbie
Jun 30, 2017
95
170
10 hours isn't as bad as you might think. My Kendra lora took approximately 16 hours with 3 epochs, 23 images, 768 res and 100 steps per image (2,300 steps in total per epoch), using a slow learning rate and settings for "dampening".


Kendra Lora
View attachment 2816845
Can you please reupload the file? It seems to have been on Anonfiles (RIP). Thanks
 

felldude

Active Member
Aug 26, 2017
572
1,694
A lora for SD XL trained on images of the 4090 (if I had a 4090, maybe I could train at native res for multiple epochs).
Although I did train the text encoder for this one, and it did seem to help, based on my experiments with CLIP.


ComfyUI_00652_.jpeg
 
  • Like
Reactions: Sepheyer

Sepheyer

Well-Known Member
Dec 21, 2020
1,571
3,768
Token Grouping vs Non-Grouping

So, (A),(B) is not the same as (A, B). It's kinda obvious; what's not obvious is where to use that in your prompts, i.e. which part of the rendering would benefit from such a re-arrangement. In my case the model Zovya's RPGArtistTools wasn't sensitive to (middleaged) while other models were. I always wrote it off as a quirk of the model. Then, instead of something like (woman)(middleaged), I rewrote it as (woman, middleaged) and got the very response I was hoping for.

Went from this:
a_15278_.png
to this:
a_15284_.png
And here's one more, although using a different model, with (oiled skin)(tan) vs (oiled skin, tan). Went from this:
a_15250_.png
to this:
a_15255_.png
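Written out, the kind of rewrite I mean looks like this. The wording is illustrative only, not the exact prompts behind the renders above, and the (...) emphasis is the webui's prompt syntax:

```python
# Illustrative only - not the exact prompts used for the renders above.

# Emphasis applied to each token on its own:
prompt_separate = "photo of a (woman)(middleaged), looking at viewer, detailed skin"

# The same tokens grouped, so the emphasis covers the phrase as a whole:
prompt_grouped = "photo of a (woman, middleaged), looking at viewer, detailed skin"
```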
 

HardcoreCuddler

Engaged Member
Aug 4, 2020
2,535
3,236
A lora for SD XL trained on images of the 4090 (if I had a 4090, maybe I could train at native res for multiple epochs).
Although I did train the text encoder for this one, and it did seem to help, based on my experiments with CLIP.


View attachment 2880075
Sick design, actually.
Kinda off-putting that this GPU has... are those USB ports? :))) AIs are awesome
 
  • Like
Reactions: felldude

KBAC

Newbie
Oct 17, 2021
17
1
How can this be fixed ?
OutOfMemoryError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 4.00 GiB total capacity; 2.19 GiB already allocated; 0 bytes free; 2.30 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
 

me3

Member
Dec 31, 2016
316
708
How can this be fixed ?
OutOfMemoryError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 4.00 GiB total capacity; 2.19 GiB already allocated; 0 bytes free; 2.30 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Depends on when you get that message. If it's when you're loading the model (and you aren't already starting with this option), you need to add --lowvram to the command when launching.
If it's when generating images, you can try reducing the width/height of the image.

By the looks of it you've got a 4GB graphics card but it's only able to allocate about 2.3GB, so something else that's running is eating a good chunk of your VRAM. You could try closing any other software before launching and see if it can use more of the VRAM as well.
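If you're running your own script rather than the webui, the allocator setting the error message mentions can be set before torch starts up; for the webui you'd set the same environment variable (plus --lowvram) in your launch script instead. A minimal sketch, with a placeholder model and prompt:

```python
import os

# Must be set before torch initializes its CUDA allocator;
# this is the max_split_size_mb setting the error message refers to.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch
from diffusers import StableDiffusionPipeline

# fp16 weights roughly halve VRAM use compared to fp32.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attention slicing trades a bit of speed for a lower peak memory.
pipe.enable_attention_slicing()

image = pipe("a test prompt", width=512, height=512).images[0]
image.save("test.png")
```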
 
  • Like
Reactions: Mr-Fox

HardcoreCuddler

Engaged Member
Aug 4, 2020
2,535
3,236
How can this be fixed ?
OutOfMemoryError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 4.00 GiB total capacity; 2.19 GiB already allocated; 0 bytes free; 2.30 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
You need to generate smaller images. The best I could get with 4GB was around 512x512.
 
  • Like
Reactions: onyx and Mr-Fox

KBAC

Newbie
Oct 17, 2021
17
1
Depends on when you get that message. If it's when you're loading the model (and you aren't already starting with this option), you need to add --lowvram to the command when launching.
If it's when generating images, you can try reducing the width/height of the image.

By the looks of it you've got a 4GB graphics card but it's only able to allocate about 2.3GB, so something else that's running is eating a good chunk of your VRAM. You could try closing any other software before launching and see if it can use more of the VRAM as well.
The problem is that I have a video card with a large amount of memory.
 

me3

Member
Dec 31, 2016
316
708
The problem is that I have a video card with a large amount of memory.
It would have been a lot easier if you'd actually said that from the start, considering there's nothing in the error you pasted that suggests it.
Which card you have is also rather relevant, considering things work better/more easily with Nvidia.
Without knowing anything about your system: an onboard GPU might be getting used instead of the actual graphics card, the card could be unsupported, or you might not have set things up correctly, so the config isn't even pointing at the card.
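A quick way to check which device PyTorch actually sees (run it in the same Python environment the webui uses):

```python
import torch

# Shows whether CUDA is usable at all and which GPUs PyTorch can see.
print("CUDA available:", torch.cuda.is_available())
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.1f} GB VRAM")
```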
 

Dagg0th

Member
Jan 20, 2022
280
2,751
How can this be fixed ?
OutOfMemoryError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 4.00 GiB total capacity; 2.19 GiB already allocated; 0 bytes free; 2.30 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
It says you have a GPU with 4GB of memory.

Check your task manager and confirm it:
Screenshot_28.png

If true, try to generate a lower-resolution image.
 
  • Like
Reactions: Mr-Fox

HardcoreCuddler

Engaged Member
Aug 4, 2020
2,535
3,236
The problem is that I have a video card with a large amount of memory.
Check that you're actually using that card for Stable Diffusion and not some integrated graphics. I don't know exactly how; check GPU usage in MSI Afterburner while Stable Diffusion is running, or something like that.
 
  • Like
Reactions: Mr-Fox

felldude

Active Member
Aug 26, 2017
572
1,694
Sick design, actually.
Kinda off-putting that this GPU has... are those USB ports? :))) AIs are awesome
ComfyUI_00705_.jpg

Lol yeah

Well, part of that is trying to train the dual text encoders (CLIP ViT-L & OpenCLIP ViT-G) with a program that doesn't recommend it... with a new concept (GPU and graphics card prompts will draw motherboards and all kinds of other stuff in SD XL).

The other part is that my training was brute-forced with small latents at a very high learning rate, for just one epoch.
 
Last edited:

Sharinel

Active Member
Dec 23, 2018
598
2,509
The problem is that I have a video card with a large amount of memory.
Does the gfx card have a lot of memory, or does the system? Remember there is RAM and VRAM - you might have 16GB of RAM and only 4GB of VRAM. What model of gfx card do you have?
 
  • Like
Reactions: onyx and Mr-Fox

felldude

Active Member
Aug 26, 2017
572
1,694
Intel claims training at full FP32 @ 467.7 images a second with a batch size of 116 on a $1,600 processor.


I'm not sure what the latent size was, but Intel is known for some low-key amazing tech.
Disney used Intel to do all the graphics for Moana; those 3D models were in the terabytes, and that was almost 10 years ago.

Acceleration (where I got the info from)

I've used AMD, Intel, ATI (before the demise) and Nvidia products... I'll use whatever works.
 
  • Like
Reactions: DD3DD and Sepheyer