The AI problem (coding or images or whatever) - it is trained on vast datasets, a significant part of which is low-quality slop.
These images are pretty consistent but still have a certain "AI vibe".
There is much more nuance to it than 'it's bad because it is [always] trained on low-quality slop', because the exact opposite is happening. Some big companies have been foolish enough to scrape Reddit and take every comment with 3 or more net upvotes at face value, yet those are the ones training these big models that are supposed to know everything, even when the data is wrong.
That's why translators can use them: it's not about good or bad knowledge but about being formally correct. They are trained on a lot of natural language and supplied with grammar and syntax rules that outweigh the 'bad data' (like someone not being able to write the language very well). The same applies to coding, which has rigid rules.
What LLMs (for now) struggle with instead is retaining the overview: you can't just give them a lot of instructions and have them all executed; that's where problems start to creep in. A dozen or two instructions may be fine, but you need more than that to produce proper code without having to fix a NameError, an AttributeError, a missing comma, or the like. The code itself, even written at great volume, is fine, but it has to fit into all the other code.
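The failure mode described above can be shown with a tiny hypothetical sketch (the function and helper names here are made up for illustration): the generated snippet is internally correct, but it assumes a helper that the surrounding codebase never defines, so it only fails once it runs.

```python
# Hypothetical sketch: the generated function is locally fine,
# but references a helper (normalize_name) that the real codebase
# never defines, so calling it raises a NameError at runtime.
def greet_user(raw_name):
    name = normalize_name(raw_name)  # NameError: name is not defined
    return f"Hello, {name}!"

try:
    greet_user("  alice ")
except NameError as e:
    print(f"Generated code didn't fit the codebase: {e}")
```

The point is that no single chunk is wrong in isolation; the error only appears when the new code has to mesh with everything else.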
The reason AI art looks so similar, all sharing the same style, is very simple: the style itself isn't bad, is very much favored, and needs very little 'choice' (editing, polish). While oddities like too many or too few fingers happened, and may still happen because people tend to use outdated tools (like 'MTL'), it's not because the models copied one-star material from sad panda; it's because small features are delicate, and due to the nature of the process, overlaps occur that are not immediately resolved (in older models).
Ask Gemini Image Generation for a completely full glass of wine, filled to the brim, and you will just get one that is normally full (well, Gemini Image Generation seems to be done now), no matter how you ask. You can't ask it for a set of pixel sprites facing a certain direction either. While I wouldn't know about the latter, ChatGPT did boast about generating such a completely full glass of wine, but alas, to access that you'd need to pay... and do you think many people would pay for that?
We are mostly and very much in the realm of 'only free stuff', which makes people like DazedAnon, who do use premium models thanks to crowdfunding and supporters, a rarity. I have yet to look further into what Shisaye does, but from the looks of it, all financial support goes into the tool rather than content / translation creation. Dazed's tool, well, others can technically use it, but they would need technical affinity (no GUI) as well as either more of it or to pay for the supported LLM models.
And to note again about LLM training and development: these huge models are fine-tuned with selected data to achieve specific tasks. Two current problems are the resources required (a lot of VRAM) and that shrunken (quantized) models may very well have incurred brain damage that makes them unusable. The models I tried with 6 GB of VRAM either had low quality (2B) or rambled (8B), hence why for now I see Gemini as the free option, but again, feel free to nitpick my attempt. Maybe play both TLs side by side.
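To make the VRAM point concrete, here is a rough back-of-the-envelope sketch (my own estimate, not from any specific tool): the memory needed just for the weights is roughly parameter count times bytes per parameter, which is why quantization is the only way larger models fit on a 6 GB card at all.

```python
def weight_vram_gb(params_billions, bits_per_param):
    """Rough VRAM estimate for model weights alone.
    Ignores KV cache, activations, and framework overhead,
    which all add more on top."""
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 1e9  # decimal GB

# An 8B model needs ~16 GB at 16-bit, ~4 GB quantized to 4-bit;
# a 2B model fits in ~4 GB even at 16-bit.
print(round(weight_vram_gb(8, 16), 1))  # 16.0
print(round(weight_vram_gb(8, 4), 1))   # 4.0
print(round(weight_vram_gb(2, 16), 1))  # 4.0
```

So an 8B model only squeezes into 6 GB after aggressive quantization, which is exactly where the "brain damage" risk comes in.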
Remember, the new era of automation (AI art and AI TL) only reached us after Covid. It's very young.
This may very well be the first AI CG thread here, 2 and a half years old.