No, it's not, because it isn't even designed as a translation tool. What ChatGPT fundamentally is, is autocomplete on steroids: it's just very good at predicting what it thinks comes next in a sentence. It doesn't actually understand the meaning of what it's being fed and choose other words that more accurately convey what's being said, even if they're less one-to-one with the original text. Compared to Google Translate or whatever, it's night and day, but that's only because ChatGPT tries to make its outputs look coherent, not to ensure they're accurate. It's not like the consumer can tell; if they could read the language, they wouldn't need a translator.
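To spell out what I mean by "autocomplete on steroids," here's a rough sketch of the mechanism. I'm using the small open gpt2 checkpoint and the Hugging Face transformers library as stand-ins, since ChatGPT's actual weights aren't public; the point is the mechanism, not the specific model.

```python
# Next-token prediction in a nutshell: the model maps a prefix to a
# probability distribution over what token comes next. That's all it does.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The cat sat on the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Probabilities for the token that would follow the prompt. There is no
# separate "meaning" output anywhere, just this distribution.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r}: {prob.item():.3f}")
```

Sample from that distribution over and over and you get fluent-looking text, which is exactly why the output looks coherent whether or not it's faithful.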
To be clear, even in spite of what I just said, I originally had the same idea: with modern translation tools the process would be sped up, because even though they're not 100% accurate, they're at least accurate enough to do the filler work, right? But then I thought about it for more than two seconds and realized the whole idea doesn't work. Even if you machine-translate it all, you still need somebody to read through everything and make sure it lines up and didn't horrifically butcher the intended meaning of something. Even worse, it's fine-combing work: instead of just writing the sentences from scratch and knowing for sure they're good, the editor has to read every single line of dialogue in the game twice and then essentially play "spot the difference" for every single one. Nightmare.
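Laid out concretely, the review pass looks something like this (the dialogue lines here are invented for illustration; a real game script runs to tens of thousands of lines), and every single pair still needs a human sign-off:

```python
# A toy sketch of the post-editing pass: every machine-translated line
# still has to be checked against its source line, one by one.
source_lines = [
    "こんにちは、旅の人。",              # "Hello, traveler."
    "ここは危険だ。すぐに立ち去れ。",    # "It's dangerous here. Leave at once."
]
mt_lines = [
    "Hello, traveler.",
    "It's dangerous here. Leave at once.",
]

for i, (src, mt) in enumerate(zip(source_lines, mt_lines), start=1):
    print(f"[{i}] SRC: {src}")
    print(f"[{i}] MT : {mt}")
    # The reviewer still has to mark every pair, which is the whole cost.
    print(f"[{i}] OK? ___")
```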
It's like the difference between building a home from scratch and building on top of someone else's shoddy construction. The latter might sound easier, until you realize you need to double-check everything that was already built to make sure it's up to code, and work around what's already there without causing the whole thing to collapse. It's why reconstruction work is often more expensive than just building from scratch. When you build it all yourself, you know everything was done right, you know why and where everything is, and you don't need to reverse-engineer every design decision that was made.
Honestly, your argument really misses the point about what AI tools like ChatGPT are actually doing and how translation works. Reducing ChatGPT to just "autocomplete on steroids" shows a pretty shallow understanding of how these models operate. Sure, it's a predictive model, but that doesn't mean it's blindly guessing what word comes next without any context. It's trained on massive datasets that help it understand relationships between words, meaning, and context. That's why it can generate coherent, contextually relevant responses. It's not perfect, but dismissing it as "just autocomplete" is an oversimplification at best.
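To make the "context" point concrete, here's a rough sketch (again using gpt2 via Hugging Face transformers as a small open stand-in, which is my choice of example, not anything specific to ChatGPT): the same word gets a different internal representation depending on the sentence around it, which is what lets these models disambiguate in a way a word-for-word lookup can't.

```python
# The same surface word, three different sentences: contextual models give
# each occurrence its own vector, so "bank" (river) and "bank" (money) end
# up in different places.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")

def embedding_of(sentence: str, word: str) -> torch.Tensor:
    """Hidden state of `word`'s token within `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    word_id = tokenizer(" " + word)["input_ids"][0]
    position = (inputs["input_ids"][0] == word_id).nonzero()[0].item()
    return hidden[position]

river1 = embedding_of("She sat on the bank of the river", "bank")
river2 = embedding_of("Reeds grew along the bank of the stream", "bank")
money = embedding_of("He deposited cash at the bank downtown", "bank")

cos = torch.nn.functional.cosine_similarity
# The same sense of "bank" typically comes out more similar than different
# senses, which is the "relationships between words and context" point.
print("river vs river:", cos(river1, river2, dim=0).item())
print("river vs money:", cos(river1, money, dim=0).item())
```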
Now, comparing it to Google Translate? You're not really seeing the big picture. Google Translate has its strengths, but so does ChatGPT, and they serve different purposes. Google Translate is focused purely on translation, while ChatGPT can handle more nuanced tasks: providing context, clarifying meaning, even explaining ambiguities that a static translation tool can't. And if you think Google Translate is some paragon of accuracy, you haven't been paying attention. Both tools make mistakes, but ChatGPT can interpret language more flexibly and naturally. It's not just about "making outputs look coherent"; it's about actually understanding the sentence in context, something traditional translation tools struggle with too.
And let's talk about the real elephant in the room: your whole point about human oversight. You claim you need someone to fine-comb AI translations to avoid "butchering" the meaning, but here's the thing: human translators have been caught intentionally screwing up translations. There have been scandals, especially in the anime community, where translators didn't just make mistakes; they deliberately altered scripts, inserting their own political views or completely changing the dialogue. You think that's better than AI? AI might not be perfect, but at least it isn't injecting its own bias into the script on purpose.
The fact that AI translations tend to stick closer to the source text is exactly why people are turning to them. They want authenticity, not someone shoehorning an agenda into the dialogue. So this idea that AI can't be trusted without human oversight is outdated. If anything, people are starting to trust AI more because it doesn't come with the baggage of personal bias that human translators sometimes bring. You're treating human oversight like some gold standard, but humans have proven time and time again that they're just as capable, if not more so, of screwing up translations, intentionally or not.
And look, the technology is improving fast. AI translations keep getting better, and soon enough they'll be accurate enough for most purposes without anyone needing to comb through every line with a magnifying glass. The whole idea that machine translation "butchers" meaning is becoming a tired argument as these models evolve. AI-driven translation tools aren't the "shoddy construction" you're suggesting; they're increasingly reliable, and they're not going to randomly inject biased interpretations the way some human translators do. So yeah, human oversight has its place, but the idea that it's always necessary, or even better, is increasingly outdated.
AI isn’t the future of translation—it’s already here. And it’s often more trustworthy than some of the humans out there doing the job.