konstant61
Member
- May 3, 2017
> I'm also currently unemployed... and I also can't afford many things...

I understand the struggle completely. Since we are in the same boat, I hope you can understand the limitations better.
> This is the Unicode error that I meant... when I corrected some translation errors in strings.rpy...

This error is not caused by RenLocalizer itself, but by how you edited the file afterward.
> As far as I know, the standard Unicode encoding is simply UTF-8. I don't know why you installed UTF-8 BOM...

You are right, standard UTF-8 is usually sufficient and is the industry standard. I only suggested BOM as an alternative because some text editors on Windows sometimes handle it better.
> I saved the files in standard UTF-8 mode... without any additions...
> P.S. UTF-8 BOM works... The game is running...

I'm glad to hear that.
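For anyone hitting the same encoding mix-up: the whole issue comes down to the three BOM bytes (EF BB BF) that some Windows editors prepend when saving as "UTF-8 with BOM". A minimal, hypothetical sketch (the helper name is made up, this is not RenLocalizer's actual code) of normalizing a .rpy file back to plain UTF-8:

```python
# Minimal sketch (hypothetical helper, not part of RenLocalizer):
# normalize a .rpy file to plain UTF-8 by dropping the BOM if present.
# codecs.BOM_UTF8 is the 3-byte signature EF BB BF that some Windows
# editors prepend when saving as "UTF-8 with BOM".
import codecs

def strip_bom(path: str) -> None:
    with open(path, "rb") as f:
        data = f.read()
    if data.startswith(codecs.BOM_UTF8):
        with open(path, "wb") as f:
            f.write(data[len(codecs.BOM_UTF8):])
```

When only reading is needed, Python's `utf-8-sig` codec does the same thing transparently: `open(path, encoding="utf-8-sig")` decodes the file correctly whether or not a BOM is present.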
> Hello. I'm facing a dilemma. I use the UnRen batch script or the Ren'Py SDK for language parsing, but in some cases, depending on the quirks of the novel's developer, they don't extract the files to the tl folder. What if a separate parser were added to this program, with an input field for the desired language? A more in-depth analysis would be required. Is that possible? I would be grateful.

I faced the same issue myself: sometimes relying on standard decompilers (like UnRen) or parsing only .rpy files isn't enough.
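For illustration only, a rough sketch of what such a fallback parser could look like: scan every decompiled .rpy script for dialogue lines directly, instead of depending on the SDK's tl/ generation. The regex and function names here are hypothetical, not RenLocalizer's actual code, and a real parser would also need escapes, multi-line strings, and menu choices:

```python
# Hypothetical fallback parser (illustrative only): scan decompiled
# .rpy scripts for quoted dialogue instead of relying on the SDK to
# generate a tl/ folder. Matches lines like:  e "Hello!"  or  "Text."
import re
from pathlib import Path

DIALOGUE = re.compile(r'^\s*(?:\w+\s+)?"((?:[^"\\]|\\.)+)"\s*$')

def extract_strings(game_dir: str) -> list:
    found = []
    for rpy in sorted(Path(game_dir).rglob("*.rpy")):
        for line in rpy.read_text(encoding="utf-8").splitlines():
            m = DIALOGUE.match(line)
            if m:
                found.append(m.group(1))
    return found
```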
> I would gladly use it, but I'm not a programmer. I use AI, but I've used up my two-week limit in VS Code, and the alternative AI there isn't as advanced, and I can't afford to pay, given the situation in Ukraine.
> It's one thing to pay a couple of bucks for the OpenRouter API and translate a dozen novels, but it's another thing to pay these corporations many times more.
> Browser-based Gemini or Copilot can give useful advice, but it's difficult to shuffle code around when you don't understand it, and the result is some kind of bullshit.
> Therefore, all hope rests on you, if you understand this or have the means to do so.

Honestly, I'm not a programmer either. I'd describe myself as a "vibecoder", since I build everything 100% using AI too. I know exactly how painful it is to be stuck with dumb or limited models when the credits run out; it really kills the workflow.
I tested DeepL using the free API and OpenRouter using the paid API, given my lack of knowledge about your program's features. Here is what I found out:
1. If I parse the files and translate them in Translator++ using any source, then when an update comes out, I just translate the new lines.
2. Your program really works great, even with Google Translate or any other translation source, but you can't continue an existing translation; you can only do a complete re-translation, though that is fast.
I will probably use your program only for some new novels, or ones with little text; but since I have already translated about 200 of them in Translator++, I will have to continue using it.
I may test local AI with the Jan app, but it is unclear when that will happen, given the electricity issues.
> Dude, I'd love to use local AI, but there are three problems:
> 1. There's a war going on in the country. We're being bombed. The electricity is cut off for 12 hours a day.
> 2. My laptop can't handle a powerful model, such as gemini-2b.
> 3. I'm translating into Russian, and not every model can do that; I need an accurate prompt.

I'm very sorry about your situation. I was simply sharing my experience; I thought every bit of feedback counts. For now, I'd be happy even if the DeepL translation worked perfectly.
Does it already work with the free DeepL API in the latest version? I haven’t tried it yet.
I also tried another model using LM Studio: Deepseek/deepseek-r1-0528-qwen3-8b.
The previous one, qwen3-4b, is way too dumb, so even though the translation runs through, the quality is terrible (but hey, at least it works!).
So I think for an average PC, the golden middle ground will be qwen-8b. However, after a while it stops translating. The translator feeds it batches of 50, but after some time it starts failing on 40 out of 50, as if it can't process one "chunk" of a batch or something like that; I'm not entirely sure.
> Does it already work with the free DeepL API in the latest version? I haven't tried it yet.
> I also tried another model using LM Studio: Deepseek/deepseek-r1-0528-qwen3-8b.
> The previous one, qwen3-4b, is way too dumb, so even though the translation runs through, the quality is terrible (but hey, at least it works!).
> So I think for an average PC, the golden middle ground will be qwen-8b. However, after a while it stops translating. The translator feeds it batches of 50, but after some time it starts failing on 40 out of 50, as if it can't process one "chunk" of a batch or something like that; I'm not entirely sure.

Glad to hear you're testing out different models! Here is the situation on my end:
> Dude, I'd love to use local AI, but there are three problems:
> 1. There's a war going on in the country. We're being bombed. The electricity is cut off for 12 hours a day.
> 2. My laptop can't handle a powerful model, such as gemini-2b.
> 3. I'm translating into Russian, and not every model can do that; I need an accurate prompt.

I'm really sorry to hear about everything you're going through. You mentioned your situation before, and I didn't want to press on it too much because it's such a heavy topic, but I felt the need to reach out. It's heartbreaking that you have to deal with power cuts and bombings while just trying to work on your projects. No one should have to go through that.
> Glad to hear you're testing out different models! Here is the situation on my end:

Actually, it might be better to focus on improving DeepL first, optimizing it as much as possible. AI is too expensive, and not many people use it for that reason. On the other hand, a DeepL subscription isn't expensive. And if someone wants to get a bit clever, they can create unlimited free accounts.
Regarding DeepL: I'm about to drop another update for it. It seems to have some trouble preserving tags. Honestly, testing it is a bit of a headache; I tried using the free API for testing, but I hit the character limit before I was even halfway through! By the way, that 500k limit sounds like a lot, but since it's characters and not words, it barely covers a demo game.
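On the tag-preserving problem: one common workaround (just a sketch of the general technique, not RenLocalizer's actual code; all names here are made up) is to mask Ren'Py text tags like {i}...{/i} and [variable] interpolations with neutral placeholders before sending a line to the engine, then restore them afterward:

```python
# Sketch of the placeholder technique (hypothetical helpers): mask
# Ren'Py text tags such as {i}, {/i}, {color=...} and [variable]
# interpolations before translation, restore them afterward.
import re

TAG = re.compile(r"\{[^{}]*\}|\[[^\[\]]*\]")

def protect(text):
    tags = []
    def repl(m):
        tags.append(m.group(0))
        return "<x%d>" % (len(tags) - 1)  # neutral placeholder
    return TAG.sub(repl, text), tags

def restore(text, tags):
    for i, tag in enumerate(tags):
        text = text.replace("<x%d>" % i, tag)
    return text
```

DeepL's API can additionally be told to leave markup alone via its XML tag handling options, which pairs well with XML-ish placeholders like these.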
Local LLMs & the 'Stop' Issue: You touched on something that’s been bugging me too. I’ve noticed that some models (including APIs) just stop responding at some point, and I haven't figured out why yet. My testing has been a bit limited because my laptop's fans are acting up, and I didn't want to push the hardware too hard. I didn't see any specific errors during my brief tests, but I’ve tried some 'blind' optimizations anyway.
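For the batches that die mid-run, one "blind" mitigation worth trying is bisecting a failed batch and retrying the halves, so a single bad chunk cannot sink the other 49 lines. This is only a sketch; `translate_batch` is a stand-in for whatever call the app actually makes to the model:

```python
# "Blind" mitigation sketch for batches that stop mid-run: if a
# chunk fails, split it in half and retry each half, so one bad
# segment cannot sink the rest. translate_batch is a stand-in for
# whatever call actually goes to the model or API.
def translate_with_split(texts, translate_batch, min_size=1):
    try:
        return translate_batch(texts)
    except Exception:
        if len(texts) <= min_size:
            raise  # a single line still fails: surface the real error
        mid = len(texts) // 2
        return (translate_with_split(texts[:mid], translate_batch, min_size)
                + translate_with_split(texts[mid:], translate_batch, min_size))
```

The nice side effect is diagnostic: if the run eventually narrows down to one line that always fails, that line is probably what the model chokes on.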
The Hard Truth: To be honest, I can't guarantee that any engine other than Google Translate will work 100% perfectly, and I'm not sure I can give that promise for the future either. Chasing free API limits just to debug things is exhausting; at this point, rebuilding the entire app sounds easier than dealing with these free-tier restrictions!
So, I don't want to overpromise on the AI/LLM side of things. That said, I'm not giving up; I'll be 'consulting' with Claude Opus 4.5 and Gemini 3 Pro to see if they can help me spot why the batches are getting stuck. Hopefully, they'll have some answers!