[Stable Diffusion] Prompt Sharing and Learning Thread

Artiour

Member
Sep 24, 2017
296
1,157
What models do y'all use when you train your LoRAs?
I forgot how many models I tried, but so far I'm using AngrA nim or Realtoon3d. Something I've noticed is that what's good for training isn't necessarily good for generating, and vice versa. When the training is done I use Photoon, real3d or DarkSun to check the result, not the models I trained with. Those need very little prompt to show good results, but they aren't as good for training as the ones I used (if that makes sense).
 

me3

Member
Dec 31, 2016
316
708
Looking over sample images generated during training, I noticed that a run I'd stopped and restarted had produced very different images than it was currently giving. The only thing changed between the two runs was the seed. It might not matter too much in the end for a completed training, but given the seemingly very different starts, and how we know the seed affects things when generating, it would be surprising if it had no impact at all on the end result.
Top row is the sample images from the first 3 epochs of one run, second row is the same from a different run; the ONLY difference is the seed.
Just something it might be worth keeping in mind when training.
trainingseed.jpg

As a side note, when I first saw that first image I had a feeling I'd seen that character/face somewhere, but I can't quite place it. It looks like it's from a CGI movie or a game, and there's something very familiar about it. The image is also strangely detailed; the sample prompt is basically just the trigger word, so it's "all decided by the AI". The composition and everything seems so intentional, which makes it even stranger...
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Looking over sample images generated during training, I noticed that a run I'd stopped and restarted had produced very different images than it was currently giving. The only thing changed between the two runs was the seed. It might not matter too much in the end for a completed training, but given the seemingly very different starts, and how we know the seed affects things when generating, it would be surprising if it had no impact at all on the end result.
Top row is the sample images from the first 3 epochs of one run, second row is the same from a different run; the ONLY difference is the seed.
Just something it might be worth keeping in mind when training.
View attachment 2930257

As a side note, when I first saw that first image I had a feeling I'd seen that character/face somewhere, but I can't quite place it. It looks like it's from a CGI movie or a game, and there's something very familiar about it. The image is also strangely detailed; the sample prompt is basically just the trigger word, so it's "all decided by the AI". The composition and everything seems so intentional, which makes it even stranger...
If these are your sample images, I think your LoRA is going to be amazing; they all look very good.

My impression atm:
The seed is not used for the actual training; it's only used for generating the sample images, so it shouldn't affect the end result in any way. However, it will potentially guide you in adjusting settings and could have a domino effect in that sense. As long as you keep that in mind and go by the results from testing in SD, rather than by the sample images, to guide your settings adjustments, my earlier statement should hold true. I think you can set it to "-1" if you want a randomized seed for the sample images. The purpose of a static seed is to have something of a constant to sample against, so it's easier to judge the training.
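A minimal sketch of why a static sample seed makes the snapshots comparable (not the trainer's actual code, just an illustration in PyTorch): with the same seed, every epoch's sample starts from the same latent noise, so the only thing changing between snapshots is the LoRA itself.

```python
import torch

def initial_latents(seed: int, shape=(1, 4, 64, 64)):
    # fixed seed -> identical starting noise for every sample render
    gen = torch.Generator("cpu").manual_seed(seed)
    return torch.randn(shape, generator=gen)

print(torch.equal(initial_latents(42), initial_latents(42)))  # True: same noise every epoch
print(torch.equal(initial_latents(42), initial_latents(43)))  # False: new seed, new starting point
```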
 
Last edited:

me3

Member
Dec 31, 2016
316
708
If these are your sample images, I think your LoRA is going to be amazing; they all look very good.
The seed is not used for the actual training; it's only used for generating the sample images, so it shouldn't affect the end result in any way. However, it will potentially guide you in adjusting settings and could have a domino effect in that sense. As long as you keep that in mind and go by the results from testing in SD, rather than by the sample images, to guide your settings adjustments, my earlier statement should hold true. I think you can set it to "-1" if you want a randomized seed for the sample images. The purpose of a static seed is to have something of a constant to sample against, so it's easier to judge the training.
Looking at the code, the seed does seem to be used for the sample images, HOWEVER it's also used to seed Python's random. That means that ANYTHING using Python's random is affected by it. From a quick glance through some of the code (so I might have misread some of these next parts), it will affect things like bucket shuffling, color augmentation, caption shuffling, caption dropout, image cropping, flip augmentation, and potentially other places I've overlooked, and that doesn't include any other classes that might be involved.

So by the looks of it, there are quite a few things it can influence... it should also make results potentially deterministic.
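As a rough illustration of the kind of thing I mean (a toy sketch, not the training script's real functions): once random is seeded, augmentations like caption shuffling and tag dropout become a fixed stream that changes entirely when the seed changes.

```python
import random

def augment_caption(caption: str, dropout: float = 0.1) -> str:
    # toy stand-ins for caption shuffle + tag dropout, both driven by random
    tags = [t.strip() for t in caption.split(",")]
    random.shuffle(tags)                                    # order depends on the seed
    tags = [t for t in tags if random.random() > dropout]   # dropped tags depend on the seed
    return ", ".join(tags)

random.seed(42)
print(augment_caption("trigger_word, 1girl, red hair, smile, outdoors"))
random.seed(42)
print(augment_caption("trigger_word, 1girl, red hair, smile, outdoors"))  # identical to the first
random.seed(43)
print(augment_caption("trigger_word, 1girl, red hair, smile, outdoors"))  # different order/drops
```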
 
  • Like
Reactions: Mr-Fox

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Looking at the code, the seed does seem to be used for the sample images, HOWEVER it's also used to seed Python's random. That means that ANYTHING using Python's random is affected by it. From a quick glance through some of the code (so I might have misread some of these next parts), it will affect things like bucket shuffling, color augmentation, caption shuffling, caption dropout, image cropping, flip augmentation, and potentially other places I've overlooked, and that doesn't include any other classes that might be involved.

So by the looks of it, there are quite a few things it can influence... it should also make results potentially deterministic.
I'm code "dyslexic".. Can't read any code what so ever.
Maybe I was mistaken and had the wrong impression. I was only going by what I have read or rather what I "remember" having read. I was also thinking from a sense of logic that the entire point of the source images is to "create" or influence the training. So I thought the seed would only be for sampling the "training" or the stage the Lora is in at that moment in training.
If the seed can have an influence over the end result then why is no guides or tutorials even mentioning it? I did only find a little about it in the guide on rentry I often refer to. It does say that seed can potentially have an effect but the optimizers should guard against it having a very bad effect. Though op had seen it happen and the fix was to change the seed in that scenario, so maybe you're on to something after all. He did state that it should be very rare. If it has potential to influence the training in an undesired direction this would be something to mention one would think. It should then be part of all guides hopefully, with some form of pointer to what seed we should use. If you find something more about this, either info or from your own observations. I would be very interested to know more.
 
Last edited:

picobyte

Active Member
Oct 20, 2017
639
711
Every random number is based on the seed that was set, so with one particular seed and the exact same instructions, settings and architecture, someone else is able to regenerate the image. With the same seed and varying the instructions or settings, you can create a similar but not exactly the same image. You can use the X/Y/Z plot feature (under Script) to investigate how an image changes with one particular seed and different instructions. Similarly, the Dynamic Prompts extension can be used to change the prompt slightly for the same seed.
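The same idea can be shown outside the webui; here's a minimal diffusers sketch (the model id, prompts and seed are just placeholders, and it assumes the diffusers library and that checkpoint are available): pin the seed and vary only the prompt, and you get similar-but-different siblings of the same composition.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

seed = 1234
for prompt in ["portrait, soft light", "portrait, rim lighting"]:
    gen = torch.Generator("cpu").manual_seed(seed)   # same starting noise every time
    image = pipe(prompt, generator=gen, num_inference_steps=20).images[0]
    image.save(prompt.replace(", ", "_").replace(" ", "-") + ".png")
# rerunning with the same seed, prompt and settings reproduces the image;
# changing only the prompt gives a related, but not identical, result
```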
 

me3

Member
Dec 31, 2016
316
708
I'm code "dyslexic".. Can't read any code what so ever.
Maybe I was mistaken and had the wrong impression. I was only going by what I have read or rather what I "remember" having read. I was also thinking from a sense of logic that the entire point of the source images is to "create" or influence the training. So I thought the seed would only be for sampling the "training" or the stage the Lora is in at that moment in training.
If the seed can have an influence over the end result then why is no guides or tutorials even mentioning it? I did only find a little about it in the guide on rentry I often refer to. It does say that seed can potentially have an effect but the optimizers should guard against it having a very bad effect. Though op had seen it happen and the fix was to change the seed in that scenario, so maybe you're on to something after all. He did state that it should be very rare. If it has potential to influence the training in an undesired direction this would be something to mention one would think. It should then be part of all guides hopefully, with some form of pointer to what seed we should use. If you find something more about this, either info or from your own observations. I would be very interested to know more.
A good seed for one thing can be a bad seed for another; we've all more than likely seen that when making images.
As for guides, I highly doubt many of the writers have considered it; probably many of them don't have much idea about the inner workings of the code or the AI. But also, with the sheer number of potential seeds, it's possible it will never have an impact big enough for many people to have noticed (or they just haven't cared).
Even if the seed affects things in the actual training, the "scale" of the difference between sample images might not be (and probably isn't) representative of the scale of the difference between the training results.

I'm just pointing out that it is involved in multiple elements of the training, so it has an effect, but the scale is "unknown".
Looking at the things I listed, all of those are optional in some way, I believe, so if those are what it's used for, it's quite possible that many (i.e. guide writers) aren't using those options in enough trainings (and paying attention) to notice.

It's also quite possible that most people aren't insane enough to stubbornly keep running the same dataset over and over with little to no changes, trying to squeeze out those last bits... I mean, I've only been at it for a week... give or take... ish... hmmm... maybe I need to rethink some priorities... naaahh...
 
Last edited:
  • Like
Reactions: Mr-Fox

picobyte

Active Member
Oct 20, 2017
639
711
If by softer you mean blurry, then use blurry or bokeh, and weight it like (blurry:1.5) to make it stronger. Lighting makes a difference too, e.g. rim lighting. You can use certain artist styles for a strong effect, and there are certain checkpoints and LoRAs that have this effect. Don't use certain words like ultra detailed (there are several more like it). Install the extension with wildcards and look through them to get an impression of which tokens have this effect and which ones you should not use.
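As a rough example of the weighting (the prompt content here is just illustrative, not from my actual generations):
portrait of a woman in a park, rim lighting, bokeh, (blurry:1.5), style of Brandon Woelfel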
 

pazhentaigame

New Member
Jun 16, 2020
14
3
If by softer you mean blurry, then use blurry or bokeh, and weight it like (blurry:1.5) to make it stronger. Lighting makes a difference too, e.g. rim lighting. You can use certain artist styles for a strong effect, and there are certain checkpoints and LoRAs that have this effect. Don't use certain words like ultra detailed (there are several more like it). Install the extension with wildcards and look through them to get an impression of which tokens have this effect and which ones you should not use.
thanks, I asked a page I follow; it turns out he edits them in Adobe Lightroom to get that effect
 

picobyte

Active Member
Oct 20, 2017
639
711
Heh, that would work too
00061-1196463584.png 00059-1196463582.png 00049-1196463572.png 01586-303991053.png 01584-303991053.png

Edit: The first three images had prompts with a typo, a double ':' before a LoRA weight. There was no clear error, and consequently none of the LoRA models actually loaded, so the images were good only due to the dreamshaperPixelart model in use (and the prompt). When corrected, the simultaneous loading of the LoRAs caused the images to get garbled. I had to use the block weights extension to fix this. Now I get somewhat usable images again, in a completely different style. The last two images are the result of this; not perfect, but I was trying to generate some corruption/sex toy 'inventory' screen.
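For anyone curious, the typo was of this shape (the LoRA name here is just a placeholder, not the one I actually used): something like <lora:someLora::0.8> where it should have been <lora:someLora:0.8>, and the webui skipped the LoRA without complaining.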
 
Last edited:
  • Like
Reactions: pazhentaigame

picobyte

Active Member
Oct 20, 2017
639
711
If you want to adjust a single image, then just GIMP it or similar. If you want to affect all generated images, then you'll have to adjust the prompt: artist style, or what's described in the links. Personally I don't really use styles that often. BTW, the negative prompt also matters. E.g. any of: style of Paul Barson, style of Oleg Oprisco, style of Brandon Woelfel, style of John Atkinson Grimshaw, style of Johan Hendrik Weissenbruch
Or you can drop the image in the extension (of which I am the maintainer, actually) to see what tokens it produces.
 
Last edited:

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
What prompt can make the overall image softer?
I tried less contrast or a soft theme;
there's no difference.
Use the word "soft" .. in different descriptions such as "soft light", "soft shadow", "soft edge" also use "bokeh", "filmgrain". Try different samplers. Try "clarity" in negative, probably need to add/retract weight. "diffused light" has a softening effect. If you are ither using hiresfix or upscaling use ultrasharp for softer details. Nmkd gives more crisp edges.
 
Last edited:
  • Like
Reactions: Jimwalrus

Sharinel

Active Member
Dec 23, 2018
598
2,509
Heh, that would work too
View attachment 2931451 View attachment 2931454 View attachment 2931456 View attachment 2932680 View attachment 2932678

Edit: The first three images had prompts with a typo, a double ':' before a LoRA weight. There was no clear error, and consequently none of the LoRA models actually loaded, so the images were good only due to the dreamshaperPixelart model in use (and the prompt). When corrected, the simultaneous loading of the LoRAs caused the images to get garbled. I had to use the block weights extension to fix this. Now I get somewhat usable images again, in a completely different style. The last two images are the result of this; not perfect, but I was trying to generate some corruption/sex toy 'inventory' screen.
I did a plot of the first image. I don't have any of the LoRAs or the VAE, so I took them all out and was left with this:

xyz_grid-0000-1196463584.jpg
 
  • Like
Reactions: Mr-Fox

picobyte

Active Member
Oct 20, 2017
639
711
You are using a different checkpoint; the one I was using was the dreamshaperPixelart one mentioned above. Also, the seed is on CPU; you can set that in the settings. I have an AMD GPU, and CPU is more platform independent. You shouldn't need the LoRAs, because I made a mistake and none actually loaded :D Your images are fairly ok. Reproducing my last two images requires the block weights extension, and those did use the LoRAs. Steps beyond 20 are usually fairly stable. A parameter to play with besides the seed is the CFG scale. Also, if you have multiple checkpoints, you could use those as the Z axis.
Don't use too many different values for each axis, or your matrix will end up being too big (another setting allows creating larger images, but it's better to just do several queries in a row). The Agent Scheduler is also a nice extension to have.
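As a worked example of how fast the matrix grows (numbers purely for illustration): 6 seeds on X times 5 CFG values on Y times 3 checkpoints on Z is already 6 × 5 × 3 = 90 images in a single run.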
00008-180509750.png
 
Last edited:
  • Like
Reactions: Mr-Fox

Sharinel

Active Member
Dec 23, 2018
598
2,509
You are using a different checkpoint; the one I was using was the dreamshaperPixelart one mentioned above. Also, the seed is on CPU; you can set that in the settings. I have an AMD GPU, and CPU is more platform independent. You shouldn't need the LoRAs, because I made a mistake and none actually loaded :D Your images are fairly ok. Reproducing my last two images requires the block weights extension, and those did use the LoRAs. Steps beyond 20 are usually fairly stable. A parameter to play with besides the seed is the CFG scale. Also, if you have multiple checkpoints, you could use those as the Z axis.
Don't use too many different values for each axis, or your matrix will end up being too big (another setting allows creating larger images, but it's better to just do several queries in a row). The Agent Scheduler is also a nice extension to have.
View attachment 2933340
You misunderstand me; I wasn't trying to replicate your picture, I was just showing people the difference the checkpoint can make. I'll disagree with you on steps though, especially if you use a lot of LoRAs or ADetailer. I regularly get artifacts at lower steps; I find 40+ to be where those disappear.
 
  • Like
Reactions: Mr-Fox

picobyte

Active Member
Oct 20, 2017
639
711
Ok, I indeed misunderstood, and you are probably right about the extra steps:
if you use a lot of LoRAs or ADetailer I regularly get artifacts at lower steps; I find 40+ to be where those disappear.
I run on CPU, though, so for me 40 steps is not really an option anyway. If LoRAs have issues at 20 steps, I use the block weights extension, other tricks, or drop them.

In case you need one, here's a guide (maybe not the one you were looking for).

00041-180509854.png 00011-180509992.png 00012-180509987.png 00017-180509956.png 00019-180509950.png 00027-180509907.png 00030-180509893.png 00032-180509882.png 00034-180509877.png 00038-180509862.png 00042-180509853.png 00057-180509613.png 00080-180509687.png 00084-180509697.png
 
Last edited:
  • Like
Reactions: Mr-Fox