Weekend, so I played around again.
You need to realize that a prompt is not a strict instruction to a generative AI. Every tag in the positive/negative prompt increases/decreases the probability of getting the desired result, if the model knows what the tag means. There are a lot of tags that don't produce any response signal from the model (for example, something weird like "5K22v", or some rare words).
High strength values can cause many problems: hair color changes, extra hands, overfitting noise, and so on.
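To illustrate, in AUTOMATIC1111-style UIs tag strength is usually written as `(tag:weight)`. A rough sketch of the point above (the tags themselves are made up for the example):

```
(red hair:1.1), 1girl, portrait    <- mild emphasis, usually safe
(red hair:1.8), 1girl, portrait    <- pushed too hard: oversaturated color,
                                      bleed into clothes/background, artifacts
```

The exact threshold where things break depends on the model and the tag, so it's something to find by trial.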
If the AI doesn't know something, then you can't produce it (for example, Sora doesn't know what Sabia looks like, so it produces an image with the wrong hairstyle). Some tags may invoke more concepts than you want (trigger words are a good example: they tend to invoke all the concepts related to a LoRA, each with a different strength).
If you want some rare view angle (concept), then you probably need to find/create a reference and use it with img2img and/or ControlNet. You may also try to train a specific LoRA for that view angle (concept); this should increase the probability of getting a proper result.
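As a rough sketch (not tested here, and it needs a GPU plus a model download), the img2img route looks something like this with the `diffusers` library; `reference_angle.png` is your reference image, and `strength` controls how far the model can drift from it:

```
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

ref = Image.open("reference_angle.png").convert("RGB")

# Lower strength keeps the composition (and your view angle) from the
# reference; higher strength lets the prompt override it.
out = pipe(prompt="your scene description here", image=ref, strength=0.5)
out.images[0].save("result.png")
```

The idea is that the model only has to refine a pose/angle it is shown, instead of inventing one it rarely saw in training.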
"... and so on until me and mr stable diffusion meet on common ground but i feel robbed"
I have tried to create some scenes 200+ times and got nothing) Now I average about 10 tries (more with good prompts, fewer with bad ones); if I don't get the desired result, I switch to another scene.
The current generation of AIs is just another kind of lootbox, with some parameters to control)
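The lootbox comparison actually has simple math behind it: if each seed independently gives a usable image with probability p, the number of tries until the first hit follows a geometric distribution with mean 1/p. A quick sketch (the 10% success rate is just an assumed figure matching "about 10 tries on average"):

```python
import random

def expected_attempts(p):
    """Expected number of tries until the first success (geometric distribution)."""
    return 1 / p

def simulate(p, trials=100_000, seed=0):
    """Empirical average: keep rolling until a success, count the tries."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        n = 1
        while rng.random() >= p:
            n += 1
        total += n
    return total / trials

# Suppose a good prompt gives a usable image ~10% of the time:
print(expected_attempts(0.10))   # 10.0
print(simulate(0.10))            # close to 10 empirically
```

So improving the prompt doesn't guarantee anything on a given seed; it just shifts p, which is exactly the "parameters to control" part.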