[Stable Diffusion] Prompt Sharing and Learning Thread

me3

Member
Dec 31, 2016
316
708
every image... from a nude leg to just me drawing from scratch
every single ControlNet option: depth, lineart, etc.
every checkpoint, 10+ of them that I've experimented with
the tights just always pop up somehow
ahh... sorry, I shouldn't have brought the question up
my work is just too complicated, like creating a porn comic,
and it has some specific clothing designs in it
so there are too many factors that could be the problem
thanks anyway
If it happens with almost everything, it should be possible to generate a random image to share, which will make it easier to help you figure out what's wrong. We can help work out whether it's the prompt or something else. If you don't want to share what you're specifically working on, any similarly prompted image where the tights are a problem would work.

Also, by any chance are you using a language translation mod/extension for the prompt? If so, that could be causing the confusion; a lot of words have multiple meanings and multiple translations across languages. Just think of a simple word like "mine" in English, which has at least three very different meanings: ownership ("that thing is mine"), a type of weapon/explosive, and a gold/silver/coal/etc. mine.
 

felldude

Active Member
Aug 26, 2017
512
1,502
Too early... nah

On my machine I generate at 768x768 with a batch size of 2 until I find an image that looks decent,
then inpaint the upscaled image natively at 1536x1536 (with the WAS suite upscale).

I have the Impact Pack and WAS suite installed to do the full 4k upscale with pipelines and masks and all that, but I've never bothered to do it.

ComfyUI_00276_.png

For a time reference, on a 3050 the batch of two 768x768 images takes 24 seconds,
and the 1536x1536 inpaint takes 70 seconds.
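
For anyone without a ComfyUI graph handy, here's a minimal diffusers sketch of the same "generate small, pick one, then refine the upscaled image" idea. It is not the WAS-suite/Impact-Pack pipeline described above; the checkpoint name, prompt, sizes and strength are placeholder assumptions.

```python
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

model_id = "runwayml/stable-diffusion-v1-5"  # assumption: any SD 1.x checkpoint would do
txt2img = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

# Step 1: cheap 768x768 candidates (batch of 2); pick the one that looks decent.
candidates = txt2img(prompt="portrait of a knight, detailed armor",
                     height=768, width=768, num_images_per_prompt=2).images
best = candidates[0]

# Step 2: upscale to 1536x1536 and re-denoise lightly -- a rough stand-in for the
# native inpaint/upscale pass described in the post.
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
upscaled = best.resize((1536, 1536))
final = img2img(prompt="portrait of a knight, detailed armor",
                image=upscaled, strength=0.35).images[0]
final.save("refined_1536.png")
```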
 
  • Like
Reactions: Mr-Fox

rogue_69

Newbie
Nov 9, 2021
79
253
Daz render (the model uses textures created in Stable Diffusion), then I put the frames back through Stable Diffusion to get the face I really wanted, then used Flow Frames to smooth out the animation a bit. I tried it with Roop, but liked the results without it better. I used a 0.2 denoising strength. ezgif.com-gif-maker (1).gif ezgif.com-video-to-gif (2).gif
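
For reference, a minimal sketch of the "run rendered frames back through Stable Diffusion at low denoising strength" step, using diffusers. The paths, checkpoint and prompt are placeholders, and the Daz rendering and Flow Frames interpolation steps aren't shown.

```python
import glob
import os
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

os.makedirs("sd_frames", exist_ok=True)
prompt = "photo of a woman, detailed face"  # placeholder prompt

for i, path in enumerate(sorted(glob.glob("daz_frames/*.png"))):
    frame = Image.open(path).convert("RGB")
    # strength=0.2 keeps the original pose and lighting, mostly touching up the face
    out = pipe(prompt=prompt, image=frame, strength=0.2,
               generator=torch.Generator("cuda").manual_seed(1234)).images[0]
    out.save(f"sd_frames/{i:04d}.png")
```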
 

me3

Member
Dec 31, 2016
316
708
Daz render (the model uses textures created in Stable Diffusion), then I put the frames back through Stable Diffusion to get the face I really wanted, then used Flow Frames to smooth out the animation a bit. I tried it with Roop, but liked the results without it better. I used a 0.2 denoising strength. View attachment 2926980 View attachment 2926978
(Warning: this post is probably terrible at getting my intention across.)
I don't know if it's intentional or due to image compression etc., but quality-wise (graphically) it seems very low compared to what we know you can get from AI. It still looks very much like a "low-textured Daz" render, if that makes sense. I'm not having a go at the work, I'm just not getting the sense that you're taking advantage of the level of detail/quality you could be getting from the AI.
 

rogue_69

Newbie
Nov 9, 2021
79
253
(Warning: this post is probably terrible at getting my intention across.)
I don't know if it's intentional or due to image compression etc., but quality-wise (graphically) it seems very low compared to what we know you can get from AI. It still looks very much like a "low-textured Daz" render, if that makes sense. I'm not having a go at the work, I'm just not getting the sense that you're taking advantage of the level of detail/quality you could be getting from the AI.
I compressed it to turn it into a GIF, so it could be that. I'll try to get a link to an actual MP4 of it when I get off work. I'm interested to hear your opinion on that. I'm still in the experimenting stage right meow.
 
  • Like
Reactions: Mr-Fox

Artiour

Member
Sep 24, 2017
261
1,093
What models do y'all use when you train your LoRAs?
I forget how many models I've tried, but so far I'm using AngrA nim or Realtoon3d. Something I've noticed: what's good for training isn't necessarily good for generating, and vice versa. When the training is done I use Photoon, real3d or DarkSun to see the result, not the models I trained with; those need very little prompting to show good results, but they aren't as good for training as the ones I used (if that makes sense).
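
As a rough illustration of training on one checkpoint but testing on another, here's a hedged diffusers sketch that attaches a finished LoRA to a different base model for generation. The checkpoint path, LoRA filename, trigger word and scale are placeholders, not the specific models named above.

```python
import torch
from diffusers import StableDiffusionPipeline

# Generation checkpoint -- a different model from the one the LoRA was trained against
pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/generation_checkpoint", torch_dtype=torch.float16).to("cuda")

# Attach the finished LoRA
pipe.load_lora_weights("path/to/my_character_lora.safetensors")

# "scale" acts as the LoRA strength here (placeholder trigger word and prompt)
image = pipe("my_trigger_word, portrait, best quality",
             cross_attention_kwargs={"scale": 0.8}).images[0]
image.save("lora_test.png")
```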
 

me3

Member
Dec 31, 2016
316
708
Looking over the sample images generated during training, I noticed that a run I'd stopped and restarted had produced very different images than the current one. The only thing changed between the two runs was the seed. It might not matter too much in the end for a completed training, but given the seemingly very different starts, and given how we know the seed affects things when generating, it would be surprising if it had no impact at all on the end result.
The top row is the sample images from the first 3 epochs of one run, the second row is the same from a different run; the ONLY difference is the seed.
Just something that might be worth keeping in mind when training.
trainingseed.jpg

As a side note, when I first saw that first image I had a feeling I'd seen that character/face somewhere, but I can't quite place it. It looks like it's from a CGI movie or a game and there's something very familiar about it. The image is also strangely detailed; the sample prompt is basically just the trigger word, so it's "all decided by the AI". The composition all seems so intentional, which makes it even stranger...
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
Looking over the sample images generated during training, I noticed that a run I'd stopped and restarted had produced very different images than the current one. The only thing changed between the two runs was the seed. It might not matter too much in the end for a completed training, but given the seemingly very different starts, and given how we know the seed affects things when generating, it would be surprising if it had no impact at all on the end result.
The top row is the sample images from the first 3 epochs of one run, the second row is the same from a different run; the ONLY difference is the seed.
Just something that might be worth keeping in mind when training.
View attachment 2930257

As a side note, when I first saw that first image I had a feeling I'd seen that character/face somewhere, but I can't quite place it. It looks like it's from a CGI movie or a game and there's something very familiar about it. The image is also strangely detailed; the sample prompt is basically just the trigger word, so it's "all decided by the AI". The composition all seems so intentional, which makes it even stranger...
If these are your sample images I think your LoRA is going to be amazing; they all look very good.

My impression atm:
The seed is not used for the actual training, only for generating the sample images, so it shouldn't affect the end result in any way. However, it can guide you in adjusting settings and could have a domino effect in that sense. As long as you keep that in mind and don't go by the sample images, but instead go by the results from testing with SD to guide your settings adjustments, my earlier statement should hold true. I think you can set it to "-1" if you want a randomized seed for the sample images. The purpose of a static seed is to have a constant to get samples from, so it's easier to judge the training.
 
Last edited:

me3

Member
Dec 31, 2016
316
708
If these are your sample images I think your LoRA is going to be amazing; they all look very good.
The seed is not used for the actual training, only for generating the sample images, so it shouldn't affect the end result in any way. However, it can guide you in adjusting settings and could have a domino effect in that sense. As long as you keep that in mind and don't go by the sample images, but instead go by the results from testing with SD to guide your settings adjustments, my earlier statement should hold true. I think you can set it to "-1" if you want a randomized seed for the sample images. The purpose of a static seed is to have a constant to get samples from, so it's easier to judge the training.
Looking at the code, the seed does seem to be used for the sample images, HOWEVER it's also used to seed Python's random module. That means that ANYTHING using Python's random is affected by it. Purely from a quick glance through some of the code (so I might have misread some of these next parts), it will affect things like bucket shuffling, color augmentation, caption shuffling, caption dropout, image cropping, flip augmentation and potentially other places I've overlooked, and that doesn't include any other classes that might be involved.

So by the looks of it, there are quite a few things it can influence... it should also make results potentially deterministic.
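
To make the point concrete, here's a minimal, self-contained Python illustration (not the actual sd-scripts code; the functions are simplified stand-ins) of how a single random.seed() call makes every later augmentation decision a deterministic function of the seed.

```python
import random

def augment_caption(caption, shuffle=True, dropout_rate=0.1):
    tags = caption.split(", ")
    if shuffle:
        random.shuffle(tags)  # tag order now depends on the seed
    tags = [t for t in tags if random.random() > dropout_rate]  # dropout depends on the seed
    return ", ".join(tags)

def augment_image_decisions():
    flip = random.random() < 0.5        # flip augmentation
    crop_offset = random.randint(0, 32) # crop jitter
    return flip, crop_offset

for seed in (1234, 4321):
    random.seed(seed)  # one call at startup seeds everything below
    print(seed, augment_caption("1girl, red dress, smile, outdoors"),
          augment_image_decisions())

# Re-running with the same seed reproduces exactly the same augmentations,
# so the seed does feed into what the network actually sees during training.
```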
 
  • Like
Reactions: Mr-Fox

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
Looking at the code, the seed does seem to be used for the sample images, HOWEVER it's also used to seed Python's random module. That means that ANYTHING using Python's random is affected by it. Purely from a quick glance through some of the code (so I might have misread some of these next parts), it will affect things like bucket shuffling, color augmentation, caption shuffling, caption dropout, image cropping, flip augmentation and potentially other places I've overlooked, and that doesn't include any other classes that might be involved.

So by the looks of it, there are quite a few things it can influence... it should also make results potentially deterministic.
I'm code "dyslexic".. Can't read any code what so ever.
Maybe I was mistaken and had the wrong impression. I was only going by what I have read or rather what I "remember" having read. I was also thinking from a sense of logic that the entire point of the source images is to "create" or influence the training. So I thought the seed would only be for sampling the "training" or the stage the Lora is in at that moment in training.
If the seed can have an influence over the end result then why is no guides or tutorials even mentioning it? I did only find a little about it in the guide on rentry I often refer to. It does say that seed can potentially have an effect but the optimizers should guard against it having a very bad effect. Though op had seen it happen and the fix was to change the seed in that scenario, so maybe you're on to something after all. He did state that it should be very rare. If it has potential to influence the training in an undesired direction this would be something to mention one would think. It should then be part of all guides hopefully, with some form of pointer to what seed we should use. If you find something more about this, either info or from your own observations. I would be very interested to know more.
 
Last edited:

picobyte

Active Member
Oct 20, 2017
639
689
Every random number is based on the seed that was set, so with one particular seed and the exact same instructions, settings and architecture someone else is able to regenerate the image. With the same seed and varying the instructions or settings you can create a similar but not exactly the same image. You can use the X/Y/Z plot feature (under Script) to investigate how an image changes with one particular seed and different instructions. Similarly the Dynamic prompts extension can be used to change the prompt slightly for the same seed.
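
As a small illustration of that, here's a hedged diffusers sketch (rather than the webui's X/Y/Z plot script): the same seed with identical settings reproduces an image exactly, while the same seed with a slightly changed prompt gives a related but different one. The checkpoint name and prompts are placeholders.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

seed = 1234
prompts = ["a castle on a hill, sunset",
           "a castle on a hill, sunrise"]  # same seed, one word changed

for i, prompt in enumerate(prompts):
    gen = torch.Generator("cuda").manual_seed(seed)  # same starting noise for both runs
    image = pipe(prompt, generator=gen, num_inference_steps=30,
                 guidance_scale=7.0).images[0]
    image.save(f"seed{seed}_variant{i}.png")
```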
 

me3

Member
Dec 31, 2016
316
708
I'm code "dyslexic".. Can't read any code what so ever.
Maybe I was mistaken and had the wrong impression. I was only going by what I have read or rather what I "remember" having read. I was also thinking from a sense of logic that the entire point of the source images is to "create" or influence the training. So I thought the seed would only be for sampling the "training" or the stage the Lora is in at that moment in training.
If the seed can have an influence over the end result then why is no guides or tutorials even mentioning it? I did only find a little about it in the guide on rentry I often refer to. It does say that seed can potentially have an effect but the optimizers should guard against it having a very bad effect. Though op had seen it happen and the fix was to change the seed in that scenario, so maybe you're on to something after all. He did state that it should be very rare. If it has potential to influence the training in an undesired direction this would be something to mention one would think. It should then be part of all guides hopefully, with some form of pointer to what seed we should use. If you find something more about this, either info or from your own observations. I would be very interested to know more.
A good seed for one thing can be a bad seed for another; we've all more than likely seen that when making images.
As for guides, I highly doubt many of the writers have considered it; probably many of them don't have much idea about the inner workings of the code or the AI. Also, given the sheer number of possible seeds, it may never have had an impact big enough for many people to notice (or they just didn't care).
Even if the seed affects things in the actual training, the "scale" of the difference between sample images might not be (and probably isn't) representative of the scale of the difference between the training results.

I'm just pointing out that it is involved in multiple elements of the training, so it has an effect, but the scale is "unknown".
Looking at the things I listed, all of them are optional in some way, I believe, so if those are the things the seed is used for, it's quite possible that many people (i.e. guide writers) aren't using those options in enough trainings (and paying attention) to notice.

It's also quite possible that most people aren't insane enough to keep running the same dataset over and over with little to no changes trying to squeeze out those last bits... I mean, I've only been at it for a week... give or take... ish... hmmm... maybe I need to rethink some priorities... naaahh...
 
Last edited:
  • Like
Reactions: Mr-Fox