Clothing, hairstyle/length, "sizes" and loads of such things won't be consistent between seeds on the same model, so the only way they'll be consistent across different models is if the models are built on the same data, and even then it depends on what else is involved.
Generate a few thousand images with the same prompt and you'll see there can be massive variance within a single model.
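A minimal sketch of that kind of batch run, assuming a Stable Diffusion checkpoint loaded through the Hugging Face diffusers library; the model ID, prompt and image count here are placeholder assumptions, not anything specific from this thread.

```python
# Minimal sketch: same model, same prompt, only the seed changes.
import torch
from diffusers import StableDiffusionPipeline

MODEL_ID = "runwayml/stable-diffusion-v1-5"   # assumed example checkpoint
PROMPT = "portrait of a woman in a red coat"  # assumed example prompt
NUM_IMAGES = 200                              # push this to a few thousand to really see the spread

pipe = StableDiffusionPipeline.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16
).to("cuda")

for seed in range(NUM_IMAGES):
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(PROMPT, generator=generator).images[0]
    # Any drift in clothing, hairstyle, framing etc. across these files
    # is pure seed-to-seed variance within one model.
    image.save(f"variance_seed_{seed:05d}.png")
```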
It's also very common for words to be completely ignored, and that doesn't mean the model "doesn't know" them. If you look at the images and prompts shared just in this thread, you'll see it's rather common that things clearly specified in the prompt aren't in the image, but if you keep those in the prompt and remove other elements they'll "pop up".
If you suspect the model might not "understand" something, run just that as a prompt (in some cases you might need minimal additional prompting), generate a fairly large number of images (they can be small and low quality for speed), and see what the more consistent imagery is. You might end up pretty surprised by what actually works; I've had some fairly "technical/obscure" things show up very well, while things we'd normally consider extremely basic fail completely.
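Roughly how that probe could look, again assuming diffusers and a Stable Diffusion checkpoint; the concept, resolution and step count are illustrative assumptions picked purely for speed.

```python
# Minimal sketch: probe a single concept with lots of quick, low-quality
# draft images and eyeball what shows up consistently.
import torch
from diffusers import StableDiffusionPipeline
from PIL import Image

MODEL_ID = "runwayml/stable-diffusion-v1-5"  # assumed checkpoint
CONCEPT = "houndstooth"       # the single thing you suspect gets ignored
ROWS, COLS = 6, 6             # 36 draft images
SIZE, STEPS = 256, 12         # small and low quality on purpose, for speed

pipe = StableDiffusionPipeline.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16
).to("cuda")

sheet = Image.new("RGB", (COLS * SIZE, ROWS * SIZE))
for i in range(ROWS * COLS):
    generator = torch.Generator("cuda").manual_seed(i)
    img = pipe(
        CONCEPT,                    # the concept alone is the whole prompt
        height=SIZE, width=SIZE,
        num_inference_steps=STEPS,
        generator=generator,
    ).images[0]
    sheet.paste(img, ((i % COLS) * SIZE, (i // COLS) * SIZE))

# Whatever repeats across the contact sheet is what the model actually
# associates with the word.
sheet.save("concept_probe.png")
```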
If you want absolute consistency you're gonna need something trained, but relatively basic/common concepts should work across models, especially if they're within the same "family". Generating across multiple models with multiple images from each, you'll see more variance between seeds than between models that are built on the same data. Unfortunately, massive grids like that aren't easily shared here, but you're probably better off generating them yourself anyway so they fit your preferred models and way of prompting.
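A minimal sketch of such a models-vs-seeds grid, assuming diffusers; the checkpoint IDs are placeholders, so swap in whatever models you actually use.

```python
# Minimal sketch: rows = models, columns = seeds, fixed prompt.
import torch
from diffusers import StableDiffusionPipeline
from PIL import Image

MODEL_IDS = [                                  # assumed example checkpoints
    "runwayml/stable-diffusion-v1-5",
    "stabilityai/stable-diffusion-2-1",
]
PROMPT = "portrait of a woman in a red coat"   # assumed example prompt
SEEDS = [0, 1, 2, 3, 4, 5]
SIZE, STEPS = 384, 15                          # kept small so the grid renders quickly

grid = Image.new("RGB", (len(SEEDS) * SIZE, len(MODEL_IDS) * SIZE))
for row, model_id in enumerate(MODEL_IDS):
    # Load one model at a time so VRAM use stays flat.
    pipe = StableDiffusionPipeline.from_pretrained(
        model_id, torch_dtype=torch.float16
    ).to("cuda")
    for col, seed in enumerate(SEEDS):
        generator = torch.Generator("cuda").manual_seed(seed)
        img = pipe(
            PROMPT, height=SIZE, width=SIZE,
            num_inference_steps=STEPS, generator=generator,
        ).images[0]
        grid.paste(img, (col * SIZE, row * SIZE))
    del pipe
    torch.cuda.empty_cache()

# Compare variance along a row (seeds) against variance down a column
# (models built on similar data).
grid.save("models_vs_seeds_grid.png")
```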