what models do y'all use when you train your LoRA's?
I forgot how many models I tried, but so far I'm using AngrA nim or Realtoon3d. Something I've experienced: what's good for training isn't necessarily good for generating, and vice versa. When the training is done I use Photoon, real3d or DarkSun to see the result, not the models I trained with; those need very little prompting to show good results, but they aren't as good for training as the ones I used (if that makes sense).
If these are your sample images, I think your LoRA is gonna be amazing; they all look very good.
Looking over the sample images generated during training, I noticed that a run I'd stopped and restarted gave very different images than it was currently giving. The only thing changed between the two runs was the seed. It might not matter too much in the end for a completed training, but given the seemingly very different starts, and how we know the seed affects things when generating, it would be surprising if it had no impact at all on the end result.
Top row is the sample images from the first 3 epochs of one run; second row is the same from a different run. The ONLY difference is the seed.
Just something that might be worth keeping in mind when training.
[Attachment 2930257: sample image comparison]
As a side note, when I first saw that first image I had a feeling I'd seen that character/face somewhere, but I can't quite place it. It looks like it's from a CGI movie or a game, and there's something very familiar about it. The image is also strangely detailed; the sample prompt is basically just the trigger word, so it's all "decided by the AI". The composition and everything seems so intentional, which makes it even stranger...
The seed is not used for the actual training; it's only used for generating the sample images, so it shouldn't affect the end result in any way. However, it will potentially guide you in adjusting settings and could have a domino effect in that sense. As long as you keep that in mind and go by the results from testing with SD, rather than by the sample images, to guide your settings adjustments, my earlier statement should hold true. I think you can set it to "-1" if you want a randomized seed for the sample images. The purpose of a static seed is to have something of a constant to sample against, so you can judge the training more easily.
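Just to illustrate that idea (a rough sketch, not the actual sd-scripts code; the function name and latent shape are made up): if the sample images get their own generator, a fixed seed keeps them comparable from epoch to epoch without touching the training randomness, and a negative value like -1 can simply fall back to a random seed.

```python
import torch

def make_sample_noise(sample_seed, shape=(1, 4, 64, 64)):
    # Dedicated RNG for sample generation, separate from any training RNG.
    gen = torch.Generator()
    if sample_seed is None or sample_seed < 0:   # e.g. "-1" -> randomize
        gen.seed()                               # fresh, non-deterministic seed
    else:
        gen.manual_seed(sample_seed)             # fixed seed -> comparable samples
    return torch.randn(shape, generator=gen)     # global/training RNG untouched

# The same fixed seed gives the same starting noise every time, so the sample
# images stay comparable across epochs/checkpoints.
a = make_sample_noise(1234)
b = make_sample_noise(1234)
print(torch.equal(a, b))  # True
```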
Looking at the code, the seed does seem to be used for the sample images; HOWEVER, it's also used to seed Python's random. That means that ANYTHING using random is affected by it. From a quick glance through some of the code (so I might have misread some of these next parts), it will affect things like bucket shuffling, color augmentation, caption shuffling, caption dropout, image cropping, flip augmentation and potentially other places I've overlooked, and that doesn't include any other classes that might be involved.
So by the looks of it, there are quite a few things it can influence... It should also make results potentially deterministic. A good seed for one thing can be a bad seed for another; we've all more than likely seen that when making images.
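To illustrate how one global seed touches all of those (a rough sketch, not the actual sd-scripts code; the dropout rate, flip chance and caption format are made up):

```python
import random

def prepare_epoch(captions, seed, caption_dropout=0.05):
    random.seed(seed)                      # one global seed...
    order = list(captions)
    random.shuffle(order)                  # ...drives shuffling,
    out = []
    for cap in order:
        tags = cap.split(", ")
        random.shuffle(tags)               # caption/tag shuffle,
        if random.random() < caption_dropout:
            tags = []                      # caption dropout,
        flip = random.random() < 0.5       # and flip augmentation.
        out.append((", ".join(tags), flip))
    return out

caps = ["1girl, red hair, smile", "1girl, blue dress, outdoors"]
print(prepare_epoch(caps, 42) == prepare_epoch(caps, 42))    # True: same seed, same sequence
print(prepare_epoch(caps, 42) == prepare_epoch(caps, 1234))  # very likely False
```

Run it twice with the same seed and the shuffles, dropouts and flips repeat exactly; change the seed and the whole sequence changes.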
I'm code "dyslexic"; I can't read any code whatsoever.
Maybe I was mistaken and had the wrong impression. I was only going by what I have read, or rather what I "remember" having read. I was also thinking, from a sense of logic, that the entire point of the source images is to "create" or influence the training, so I thought the seed would only be used for sampling the state the LoRA is in at that moment in training.
If the seed can influence the end result, then why do no guides or tutorials even mention it? I did find a little about it in the guide on rentry I often refer to: it says the seed can potentially have an effect, but the optimizers should guard against it having a very bad one. The OP had seen it happen, though, and the fix in that scenario was to change the seed, so maybe you're on to something after all. He did state that it should be very rare. If it has the potential to push the training in an undesired direction, you'd think that would be something to mention; it should then be part of all guides, hopefully with some pointer to which seed we should use. If you find out anything more about this, either info or your own observations, I would be very interested to know.
If with softer you mean blurry, then use blurry or bokeh, with a weight like (blurry:1.5) to make it stronger. Lighting makes a difference, e.g. rim lighting. You can use certain artist styles for a strong effect, and there are certain checkpoints and LoRAs that have this effect. Don't use certain words like ultra detailed; there are several more. Install the wildcards extension and look through the wildcards to get an impression of which tokens have this effect and which ones you should not use.
style of Paul Barson, style of Oleg Oprisco, style of Brandon Woelfel, style of John Atkinson Grimshaw, style of Johan Hendrik Weissenbruch
Thanks, I asked someone whose page I follow, and it turns out he edits them in Adobe Lightroom to get that effect.
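Putting a few of those suggestions together, an illustrative positive prompt might look something like "portrait of a woman in a park, bokeh, rim lighting, (blurry:1.3), style of Brandon Woelfel" (the subject, token choice and weight are just examples to tweak, not a recipe), with sharpening words like ultra detailed simply left out.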
Use the word "soft" .. in different descriptions such as "soft light", "soft shadow", "soft edge" also use "bokeh", "filmgrain". Try different samplers. Try "clarity" in negative, probably need to add/retract weight. "diffused light" has a softening effect. If you are ither using hiresfix or upscaling use ultrasharp for softer details. Nmkd gives more crisp edges.What prompt can make the overall of image softer
I try some less contrast or soft theme
there no difference
Heh, that would work too.
Did a plot of the first image. I don't have any of the LoRAs or the VAE, so I took them all out and was left with this:
[Attachments 2931451, 2931454, 2931456, 2932680, 2932678: resulting images]
Edit: The first three images had prompts with a typo, a double ':' before a LoRA weight. There was no clear error, and consequently none of the LoRA models actually loaded, so the images were good only because of the dreamshaperPixelart model in use (and the prompt). When corrected, the simultaneous loading of the LoRAs caused the images to get garbled; I had to use the block weights extension to fix that. Now I again get some usable images, in a completely different style. The last two images are the result of this, not perfect, but I was trying to generate some corruption/sex toy 'inventory' screen.
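To illustrate the typo (hypothetical LoRA name, just as an example): the normal A1111 syntax is <lora:someLora:0.8>, and the broken prompts had something like <lora:someLora::0.8>, which gives no visible error, the LoRA just never gets applied.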
You are using a different checkpoint. The one I was using was [link]. Also, the seed is on CPU; you can set that in the settings. I have an AMD GPU, and CPU is more platform independent. You shouldn't need the LoRAs, because I made a mistake and none of them actually loaded. Your images are fairly OK. Reproducing my last two images requires the [link] extension, and those did use the LoRAs. Steps beyond 20 are fairly stable, usually. A parameter to play with besides the seed is CFG scale. Also, if you have multiple checkpoints you could use that as the Z axis. Don't use too many different values for each, or your matrix will end up being too big (another setting allows creating larger images, but it's better to just do several queries in a row). The Agent scheduler is also a nice extension to have.
You misunderstand me, I wasn't trying to replicate your picture; I was just showing people the difference the checkpoint can make. I'll disagree with you on steps though, especially if you use a lot of LoRAs or ADetailer: I regularly get artifacts at lower steps, and I find 40+ to be where those disappear.
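On the matrix size point, just for a sense of scale (illustrative numbers): X = three CFG scale values, Y = three seeds and Z = two checkpoints already comes to 3 × 3 × 2 = 18 images in a single grid, and every extra value you add multiplies the total.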
[Attachment 2933340]
I run on CPU, though, so for me 40 steps is not really an option anyway. If LoRAs have issues at 20 steps, I use the block weights extension, other tricks, or drop them.