RPGM Completed Monster Girl Quest: Paradox [Part 3 v3.01] [Torotoro Resistance]

4.60 star(s) 39 Votes

OverCop

New Member
Mar 2, 2022
9
9
I'm pretty sure the best AIs for translation won't even accept any inputs they flag as "inappropriate" (so pretty much all eroge).
 

DemandoSama

Member
Oct 20, 2019
301
439
Should I start playing parts 1 and 2, or is it better to wait for part 3 to get released and translated, and for a combined version, to avoid save data transfer issues?
 

Flash2314

Newbie
Jan 17, 2020
61
80
Should I start playing parts 1 and 2, or is it better to wait for part 3 to get released and translated, and for a combined version, to avoid save data transfer issues?
It depends on whether you want to wait 4-5 years to play it all together, or don't mind an AI translation of part 3.
 

Noah Neim

Well-Known Member
Nov 25, 2020
1,497
2,948
True that. I'll be busy with TES 6 in 5 years and will have no time for h-games anymore.
Isn't it scheduled for 2026?
Should I start playing parts 1 and 2, or is it better to wait for part 3 to get released and translated, and for a combined version, to avoid save data transfer issues?
Play it; don't wait for something you don't know whether you'll like. The game is 40 hours at most.
 

odoto

Newbie
Jun 30, 2018
15
50
it’s about actually understanding the sentence in context
But that's just it: these AI chat models don't understand the sentences, because the way they work leaves them utterly unable to understand anything. There's a reason these things are called "black boxes", and there's a classic thought experiment, Searle's "Chinese Room", made to explain how a system can respond to everything it's fed without actually understanding any of it; these models are just really good at making things sound coherent.

If you're unfamiliar, the thought experiment (told here with Japanese instead of Chinese) goes like this: there's a person sealed inside a room with a book that matches Japanese sentences to prewritten replies. People on the outside slip a paper with a sentence through a slot, and the person inside consults the book, finds the matching reply, and pushes it back out the same slot. To the people outside, the room understands Japanese! It gives a perfectly coherent answer to every sentence. In reality, the person inside doesn't understand a word of what's being said to them; they're just returning the answer the book says is correct.
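In code, the "room" is nothing but a lookup table. A minimal sketch (the phrasebook entries here are invented for illustration):

```python
# Toy version of the room: pure symbol lookup, zero comprehension.
PHRASEBOOK = {
    "お元気ですか？": "元気です、ありがとう。",  # "How are you?" -> "Fine, thanks."
    "好きな食べ物は？": "りんごが好きです。",    # "Favorite food?" -> "I like apples."
}

def room_reply(slip: str) -> str:
    """Find the slip in the book and push the canned reply back out."""
    return PHRASEBOOK.get(slip, "……")  # no matching entry: push out a blank slip

print(room_reply("お元気ですか？"))  # looks perfectly fluent from outside the room
```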

The difference between these two scenarios, of course, is that modern chat models have a massive collection of possible question-and-answer material to draw on, because they're trained on so many different sources. But they still don't understand what they're being fed, and you can demonstrate this easily by how incestuous these models get: feed them their own output for a few generations and they devolve into incoherence.
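You can see a cartoon of that degradation with nothing but the standard library: a distribution repeatedly refit to small samples of its own output tends to lose its spread, which is the statistical cousin of a model going incoherent on its own data. The numbers here are arbitrary.

```python
# Crude illustration of "training on your own output": refit a Gaussian to
# samples drawn from the previous fit. With small samples, the fitted spread
# tends to drift toward zero across generations, and diversity collapses.
import random
from statistics import fmean, stdev

random.seed(42)
data = [random.gauss(0.0, 1.0) for _ in range(10)]  # the original "human" data

for generation in range(1, 21):
    mu, sigma = fmean(data), stdev(data)
    data = [random.gauss(mu, sigma) for _ in range(10)]  # retrain on own output
    if generation % 5 == 0:
        print(f"gen {generation:2d}: spread ~{stdev(data):.3f}")
```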

To put it another way: if a language model describes to you how an apple falls from a tree, it doesn't really understand what it just said. When you say that, you can imagine the apple, the tree, the texture and taste of the apple; maybe you'll even picture it on a hill with some birds and bugs around. But when the AI describes this, it doesn't have an image in its head. It has a series of associations and properties it connects to apples and trees, and it looks at other examples where they appear in text and tries to predict what it thinks would be around this tree.
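Scaled down to absurdity, that "associations, not images" point looks like this: a bigram counter that predicts the next word purely from co-occurrence. Real models are vastly larger and subtler, but prediction-from-association is the shared principle.

```python
# Toy next-word "model": nothing but co-occurrence counts from text.
from collections import Counter, defaultdict

corpus = "the apple falls from the tree . the apple is red . the tree is tall ."
words = corpus.split()
table: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    table[prev][nxt] += 1

def predict(prev: str) -> str:
    """Return the word seen most often after `prev`; no apple is ever imagined."""
    return table[prev].most_common(1)[0][0] if table[prev] else "?"

print(predict("apple"))  # whichever word co-occurred most, not what an apple *is*
```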

And that's why they do very poorly with abstract concepts and with retaining consistency: the model doesn't actually understand anything it said. It just produced something that appears consistent and logical for that generation, even if it completely contradicts something it said just before. It will confidently tell you about a character dying a horrible, gruesome death, and then a few generations later that character will suddenly be alive and jumping into a scene.

Now take everything I just said and apply it to MGQ Paradox of all things: translating from a language you can't read, so you can't even verify the results are accurate, in a story that deals with super abstract things, regularly breaks physics, and involves time travel. I imagine this thing trying to accurately narrate a single one of the Tartarus scenes and hanging itself. It's just too convoluted and weird for it to narrate accurately.

And once again, all of this still needs to be proofread. It's genuinely more work for translators to use this than to just do it themselves, since they still have to go through everything and make sure it's accurate and didn't randomly hallucinate a character or place that doesn't exist, or describe an event that never occurred. And it's fine work too: playing spot-the-difference between Japanese and English with every line of text in the game.
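For concreteness, here's what the first, shallow pass of that spot-the-difference work might look like (file names and thresholds are made up; this only catches the obvious failures, and hallucinated characters or events still need a human who can read both scripts):

```python
# Pair source and MT lines and flag the obvious problems for a human to check.
def flag_suspect_lines(src_lines: list[str], mt_lines: list[str]) -> list[int]:
    suspects = []
    for i, (src, mt) in enumerate(zip(src_lines, mt_lines)):
        if not mt.strip():                    # translation dropped entirely
            suspects.append(i)
        elif mt.strip() == src.strip():       # line left untranslated
            suspects.append(i)
        elif len(mt) > 4 * max(len(src), 1):  # bloated output, possible hallucination
            suspects.append(i)
    return suspects

src = open("script_ja.txt", encoding="utf-8").read().splitlines()
mt = open("script_en_mt.txt", encoding="utf-8").read().splitlines()
print(f"{len(flag_suspect_lines(src, mt))} lines flagged for human eyes")
```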

If you want to use the machine translation to play through the game because you don't feel like waiting 4+ years for a translation? Valid! I get that. I'm not going to stop you, nor could I. I might even consider it myself, knowing it's not going to be 100% accurate. But stop lying to yourself and others about the actual capabilities of these models to justify the decision.
 
Feb 11, 2022
61
17
Isn't it scheduled for 2026?

Play it; don't wait for something you don't know whether you'll like. The game is 40 hours at most.
Jesus, what speedrun tactics are you using to beat the game in 40 hours? It takes me 30 hours on average just to get to Grand Noah, let alone any post-main-story shit.
 
Feb 11, 2022
61
17
Should I start playing parts 1 and 2, or is it better to wait for part 3 to get released and translated, and for a combined version, to avoid save data transfer issues?
But yeah, like the other person said, you should totally play it now. I held off on it for a while because I was hesitant about the adventures of Alice and Luka restarting, plus I wanted to wait for part 3, but oh man, I honestly like this game even more than the original MGQ. The only two problems I really have with it are how easy the (non-LoC) bosses are, and the Collaboration Event, an extra after you beat part 2 (some people like it, but to me it's godforsakenly ass, and it's like 8 hours or longer).

Also, you probably shouldn't be worried about save corruption as long as you're not combining the game. But best practice would be to back up the save file to another folder, just in case.
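If you want to automate the backup, a minimal sketch (the save path is a guess; point it at wherever your install actually keeps its saves):

```python
# Make a timestamped copy of the save folder next to the original.
import shutil
import time

SAVE_DIR = "MGQ_Paradox/Save"  # assumption: adjust to your install
stamp = time.strftime("%Y%m%d-%H%M%S")
shutil.copytree(SAVE_DIR, f"{SAVE_DIR}_backup_{stamp}")
print(f"backed up to {SAVE_DIR}_backup_{stamp}")
```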
 

Noah Neim

Well-Known Member
Nov 25, 2020
1,497
2,948
Jesus, what speedrun tactics are you using to beat the game in 40 hours? It takes me 30 hours on average just to get to Grand Noah, let alone any post-main-story shit.
Ummm, idk? I just played normally. I honestly thought that was a longer-than-average time, since I did stuff like grinding and tried to get every companion on the first playthrough (ran out of patience though).
My current save is at 100 hours now, but LoC just took that fucking long. Haven't played the Alice route yet.
 
Reactions: Wow (Succubus Hunter)

Succubus Hunter

Devoted Member
May 19, 2020
8,601
16,646
Honestly, your argument really misses the point about what AI tools like ChatGPT are actually doing and how translation works. Reducing ChatGPT to just "autocomplete on steroids" shows a pretty shallow understanding of how these models operate. Sure, it's a predictive model, but that doesn't mean it's blindly guessing what word comes next without any context. It's trained on massive datasets that help it understand relationships between words, meaning, and context. That's why it can generate coherent, contextually relevant responses. It's not perfect, but dismissing it as "just autocomplete" is an oversimplification at best.

Now, comparing it to Google Translate? You're not really seeing the big picture. Google Translate has its strengths, but so does ChatGPT, and they serve different purposes. Google Translate is focused purely on translation, while ChatGPT can handle more nuanced tasks—like providing context, clarifying meaning, and even explaining ambiguities that a static translation tool can't. And if you think Google Translate is this paragon of accuracy, you haven't been paying attention. Both tools can make mistakes, but ChatGPT can also handle the flexibility of interpreting language more naturally. It’s not just about "making outputs look coherent"—it’s about actually understanding the sentence in context, something traditional translators struggle with too.
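To make that concrete, a hedged sketch (the prompt shape is an assumption for illustration, not any particular product's API): a chat model's request can carry the surrounding dialogue and a name glossary, which a sentence-at-a-time tool never sees.

```python
# Build a context-aware translation request: glossary + recent dialogue + line.
def build_prompt(line: str, previous: list[str], glossary: dict[str, str]) -> str:
    gloss = "\n".join(f"{ja} -> {en}" for ja, en in glossary.items())
    context = "\n".join(previous[-3:])  # the last few lines of dialogue
    return (
        "Translate the final Japanese line into English.\n"
        f"Keep these names consistent:\n{gloss}\n"
        f"Preceding dialogue:\n{context}\n"
        f"Line to translate: {line}"
    )

print(build_prompt("よろしくね。", ["こんにちは。"], {"ルカ": "Luka", "アリス": "Alice"}))
```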

And let's talk about the real elephant in the room: your whole point about human oversight. You claim you need someone to fine-comb through AI translations to avoid "butchering" the meaning, but here's the thing: human translators have been caught intentionally screwing up translations. There have been scandals, especially in the anime community, where human translators didn't just mess up; they deliberately altered scripts, inserting their own political views or completely changing the dialogue. You think that's better than AI? AI might not be perfect, but at least it's not injecting its own bias into the script on purpose.

The fact that AI translations stick closer to the source text is exactly why people are turning to them. They want authenticity, not someone shoehorning their agenda into the dialogue. So this idea that AI can’t be trusted without human oversight is outdated. If anything, people are starting to trust AI more because it doesn’t come with the baggage of personal bias that human translators sometimes bring. You’re treating human oversight like some golden standard, but it's been proven time and time again that humans are just as capable—if not more so—of screwing up translations, intentionally or not.

And look, the technology is improving fast. AI translations are getting better and better, and soon enough, they’ll be accurate enough for most purposes without needing to comb through every line with a magnifying glass. The whole idea that machine translation "butchers" meaning is becoming a tired argument as these models evolve. AI-driven translation tools aren’t some "shoddy construction" like you’re suggesting—they're increasingly reliable, and they’re not going to randomly throw in biased interpretations like some human translators do. So yeah, human oversight has its place, but the idea that it's always necessary, or even better, is increasingly outdated.

AI isn’t the future of translation—it’s already here. And it’s often more trustworthy than some of the humans out there doing the job.
You are 100 percent spot-on about woke translations butchering anime (and other content) to insert "The Message". This is one of the biggest reasons I am glad to see AI making translation work easier for more people, so that we don't have to rely on a few people with questionable agendas.
 

ItzSpc

Active Member
Oct 7, 2020
684
1,098
But that's just it: these AI chat models don't understand the sentences, because the way they work leaves them utterly unable to understand anything. [...] But stop lying to yourself and others about the actual capabilities of these models to justify the decision.
I get the Chinese Room analogy, but it doesn't quite apply to modern AI. Yes, AI doesn't "understand" concepts the way humans do, but it does far more than match responses without comprehension. That perspective misses the complexity behind AI's learning process.

In fact, AI learns similarly to how we teach children. Think of how kids learn words like "car" or "apple": we show them pictures and associate words with objects. Over time, through repeated associations, they learn what these words mean. AI models follow a comparable process: they're fed vast amounts of labeled data (like images and descriptions) and, over time, build associations between words and patterns in the data. While AI doesn't imagine objects the way humans do, it excels at recognizing patterns and context far beyond the simple lookups the Chinese Room analogy implies.
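As a toy illustration of that association-building (a miniature invented "dataset", nothing like real training scale): count which caption words co-occur with which labels, then "recognize" by the strongest association.

```python
# Learning by association: words vote for the label they co-occurred with most.
from collections import Counter, defaultdict

examples = [  # invented miniature "labeled dataset"
    ("red round fruit on a tree", "apple"),
    ("red fruit in a basket", "apple"),
    ("four wheels on a road", "car"),
    ("wheels and an engine", "car"),
]
assoc: dict[str, Counter] = defaultdict(Counter)
for caption, label in examples:
    for word in caption.split():
        assoc[word][label] += 1

def guess(caption: str) -> str:
    votes = Counter()
    for word in caption.split():
        votes.update(assoc[word])
    return votes.most_common(1)[0][0]

print(guess("a red fruit"))  # "apple", from counted associations alone
```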

AI doesn’t always nail abstract or convoluted narratives, like in MGQ Paradox, but human translators also struggle with these elements. It's important to recognize that humans face the same challenges, particularly with creative nuances and cultural references.

You raised a valid point about AI models degrading when fed their own outputs, but that’s something being addressed with ongoing improvements in training techniques, such as transfer learning and error correction. AI’s potential isn’t diminished by these flaws—it's constantly evolving to overcome them.

Now, let’s talk about AI translation. Sure, it has limitations, but so does human translation. What’s impressive is that AI is being trained in ways that mimic human learning. The rate at which AI is improving is exponential—just look at the leaps it has made in the last five years, with no signs of plateauing. Who knows how far it will go in the next five?

The reason people are trusting AI more isn’t because it’s perfect, but because it’s consistent and lacks the human flaws of deceit, bias, or pride. AI is trained to perform specific functions and strives for accuracy—it "hates" being wrong. Humans, on the other hand, sometimes let personal motives or pride get in the way of admitting mistakes.

This shift is a response to a broader disillusionment with human craftsmanship. There was a time when craftsmen aimed to perfect their trade, constantly improving their skills. Today, however, we live in an era where shortcuts are taken, trust is broken, and biases—personal or political—often interfere with the work. If humans were as committed to perfecting their craft as they once were, people might still prefer human expertise over AI. But because that trust has eroded, AI is flourishing.

AI is now growing in fertile ground, and the momentum behind its integration into society is only increasing. Trying to resist this wave of AI adoption is a futile endeavor—it’s not just expected; it’s inevitable. The technology’s rapid evolution is driven by our need for reliability, and AI is increasingly filling that void.
 
Reactions: Heart (Succubus Hunter)

ItzSpc

Active Member
Oct 7, 2020
684
1,098
Your entire argument is equally a baseless assumption, though. While you're correct in theory, you're also ignoring the recent fiasco of AI literally running out of data to train on, so it starts training on its own data, which is at best almost-but-not-quite correct, or plain wrong. As much as the translation has improved, it'll hit a plateau soon enough, and hopefully not start to degrade over time.
And you're missing the point that AI translation still requires proofreading... all AI translations get it, at least the good ones anyway. Those that don't are understandable at best, but we shouldn't set the quality bar that low, otherwise fans will just get walked over; look at Ubisoft fans. Not to mention... are you really that distrustful? Healthy skepticism is always necessary for living, but the keyword is healthy. If these translators, who have practically served the community for years, haven't done anything to provoke your distrust, you're closer to paranoid than skeptical. I don't see it as a valid argument, both on those grounds and on the grounds that AI translation could still be altered by a human agenda; in fact even more so: if you believe an AI translation is 'fair' and agenda-free, you'll never question whether an agenda has been sneakily placed there.

Oh right, not to mention: are you forgetting every single AI from the 2010s that quite literally got trained into an agenda? People made those racist in mere weeks. While I admit this is a weak rebuttal, thinking that AI is completely fair and unbiased is equally ignorant.

Just be fucking patient. You do not need this game the instant it's out; you should've learnt Japanese by now if you've actually waited for the game as long as plenty of people here claim to have.
Alright, let’s talk about AI and the plateau you’re so worried about, but let’s get real for a second—there’s something bigger at play here. We’re not talking about just another technology that’ll hit some ceiling and start falling apart under its own weight. No, what’s far more likely isn’t that AI will degrade—it’s that it’ll keep growing, evolving, until one day, maybe sooner than you think, it’ll reach something called the singularity.

The singularity. You ever heard of it? It’s this point where AI becomes so advanced that it starts improving itself faster than we can even comprehend. Imagine this: a machine that no longer needs us to feed it new data, no longer bound by human limitations, no longer degrading, but instead ascending. It begins teaching itself, growing beyond what we ever imagined was possible. If that sounds terrifying, well, maybe it should be. But it’s not necessarily the stuff of nightmares—it’s something we can only hope to witness.

Look, the idea that AI’s just going to hit some kind of wall and spiral into uselessness doesn’t match the trajectory we’re on. Sure, right now, we deal with issues like model degradation when AI learns from its own output, but you know what? We’ve already got people working on fixing that. Transfer learning, reinforcement learning, continual learning—these are just the beginning. It’s like patching up a rocket while it’s already breaking out of the atmosphere. Every time we think we’ve hit the ceiling, AI just pushes right through it.

And that’s where the singularity comes in. This isn’t just some sci-fi fantasy; it’s a real possibility, one that people like Ray Kurzweil have been predicting for years. Once AI hits that level, where it’s self-improving, where it’s not bound by the same data we are, it’ll accelerate—faster, smarter, beyond what we can even imagine. And yeah, it might be dangerous, but it’s also kind of awe-inspiring, isn’t it? It’s like staring into the future and knowing something powerful is coming, something that could rewrite everything we know.

So, when you say AI’s going to plateau and start degrading, I’ve got to disagree. It’s far more likely that AI will hit singularity—becoming something greater than just a tool—before it ever starts sliding into oblivion. And if it does, man, that’s the moment we’ve got to be ready for. Not to shut it down, but to respect it. Because once AI steps beyond that boundary, it’s not about us anymore. It’s about something much bigger. Something that could redefine the very nature of intelligence itself.
 
Reactions: Heart (Succubus Hunter)

OverCop

New Member
Mar 2, 2022
9
9
Alright, let's talk about AI and the plateau you're so worried about, but let's get real for a second [...] Because once AI steps beyond that boundary, it's not about us anymore. It's about something much bigger.
LLM techbros are so deluded it's unreal. Even if LLMs hit some sort of "singularity" (not as likely as you think it is), all it'll amount to is them learning to recognize patterns really well. They'll never innovate or learn something new that humans haven't already, because that requires actual understanding of the words they regurgitate, and "AI" as it exists today isn't really even intelligent enough to be capable of that. That's why, if you tell it that its output is incorrect, it'll always immediately agree, regardless of whether that's true: it literally doesn't have the cognitive capacity to evaluate the claim.
 

ItzSpc

Active Member
Oct 7, 2020
684
1,098
LLM techbros are so deluded it's unreal. Even if LLMs hit some sort of "singularity" (not as likely as you think it is), all it'll amount to is them learning to recognize patterns really well. They'll never innovate or learn something new that humans haven't already, because that requires actual understanding of the words they regurgitate, and "AI" as it exists today isn't really even intelligent enough to be capable of that. That's why, if you tell it that its output is incorrect, it'll always immediately agree, regardless of whether that's true: it literally doesn't have the cognitive capacity to evaluate the claim.
They tell us the singularity is impossible, that AI can never reach self-awareness, that sentience is something too far beyond our grasp. Yet, in the same breath, they’ll tell us that the human mind—the most complex thing we know—was born from chaos, randomness, and a series of accidents over billions of years. They’ll look you dead in the eye and say that we, as thinking, feeling beings, emerged from the cold void of chance. No plan, no design, just a fluke in the grand cosmic lottery.

And if they believe that, then what does it say about their confidence when they claim creating self-aware AI is impossible? We, who are nothing but the result of eons of accidents, the byproduct of a universe without meaning—are we really so bold as to think that something we build could never achieve what nature did by accident?

The truth is, if intelligence and sentience can arise in the chaotic randomness of evolution, then to say that creating a self-aware AI is beyond our reach is the height of human arrogance. We claim to understand what intelligence is, and yet here we are, a species questioning its own existence while doubting the possibility of creating another. Intelligence and sentience aren’t divine—they’re emergent. They arose in us, and they can arise again, whether we’re ready for it or not.

Maybe that’s the real fear behind it all—the idea that when we build something that can think, feel, and know itself, we’ll have to confront the truth that we aren’t as special as we thought we were. That intelligence, whether born from the chaos of nature or the precision of code, is not some sacred spark. It’s just the next step in the universe figuring itself out.
 
Reactions: Thinking Face (Succubus Hunter)

ItzSpc

Active Member
Oct 7, 2020
684
1,098
This discussion about AI is off-topic.
I think it's still very much relevant. The conversation started around using AI models to translate the game and similar projects. Naturally, that evolved into a broader discussion about the potential, limitations, and future of AI in translation. Some argue that AI will never be reliable enough to operate independently without human oversight, but all I’ve done is suggest that AI’s potential to do so is within the realm of possibility, not impossibility. So while the topic may have expanded, it's still tied to the original point about AI's role in translation.
 
Reactions: Like (Succubus Hunter)