... I personally would see it as a problem because there's no consistency between images. You can't make a character, save her for future use, and put her into new scenarios yet. ...
You kind of can, actually. It's not as perfect a recreation as reusing a 3D model, nor is it as simple as just bookmarking a character, but for at least a few months now it's been entirely possible to train LoRAs (small add-on weight sets that work sort of like lenses, focusing the image diffusion toward a specific category of output) for characters - and you can make them for outfits, styles, scenarios, etc.
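To give a concrete sense of the workflow, here's a minimal sketch of applying an already-trained character LoRA with Hugging Face's `diffusers` library. The LoRA file path and the "mychar" trigger word are hypothetical placeholders for whatever you actually trained:

```python
# Minimal sketch: applying a trained character LoRA in diffusers.
# The LoRA file and the "mychar" trigger word are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the character LoRA's weights into the pipeline.
pipe.load_lora_weights("path/to/my_character_lora.safetensors")

# The trigger word baked in during training pulls generation toward
# the learned character, so she stays recognizable across different
# prompts and scenarios.
image = pipe(
    "mychar standing on a beach at sunset, detailed illustration",
    cross_attention_kwargs={"scale": 0.8},  # LoRA strength
).images[0]
image.save("character_on_beach.png")
```

Once the LoRA exists, reusing the character really is about as cheap as swapping one line of the prompt.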
It's not as easy as just typing words into a box, but image diffusion is actually way more capable than it was even a year ago, if you're willing to learn the nitty-gritty of it. There are tools to lock in specific poses, to control where particular things are generated within an image, to fine-tune the style it outputs in, and so on - see the sketch below for one example.
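Pose control, for instance, is typically done with ControlNet. Here's a minimal sketch using `diffusers` again, assuming you already have an OpenPose skeleton image saved locally (the file name is a placeholder):

```python
# Minimal sketch: constraining a generation to a specific pose with
# an OpenPose ControlNet. "pose_skeleton.png" is a placeholder for a
# stick-figure pose reference you supply.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

pose = load_image("pose_skeleton.png")  # your pose reference

# The generated figure follows the skeleton in `pose`, so you control
# composition directly instead of rerolling and hoping.
image = pipe(
    "a knight in ornate armor, dramatic lighting",
    image=pose,
).images[0]
image.save("posed_knight.png")
```

And you can stack this with a character LoRA like the one above, which is how people get the same character in specific poses on demand.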
Of course, a lot of AI artists don't bother getting into the fine details, because it requires learning, and if you're willing to generate repeatedly you can eventually get something kind-of-sort-of like what you're going for just by careful prompting and random chance. But I think of it like photography - it's easy to take lots and lots of shots without much control over what you're doing and get some that look good by the law of large numbers, but someone who takes the time to learn the craft can use the available tools to get much better pictures far more consistently.
TL;DR: You can have consistency between images; it's just more complicated than entry-level image generation.