Decided to pay for the Translator++ dev's Patreon after seeing it was only $2 to download the latest version (happy to share it as long as someone can tell me how to make sure it's completely disconnected from my Patreon account), since the plugin for LLM connectivity needed a newer version than the one shared here. Come to find out, you also apparently need 1000 "points" to download the plugin, which basically means paying another $10 on top of what you already paid. It isn't disclosed anywhere that what I'd consider a core function, especially nowadays, is locked behind an additional paywall. Scummy as fuck in my opinion. If anyone knows any kind of bypass for this, please let me know. This really pissed me off.
I did try the free alternative I found on GitHub, but I couldn't get it to work with the DeepSeek API I was trying to use. Maybe I was doing something wrong, but since there's basically no documentation for it, it's impossible to know for sure. If anyone can help with that at all, please share.
This version also has an extension that uses the computer's own processing power to translate, via the Google Nano extension. Does the shared program include this extension?
I think I know the one you're talking about (the GitHub one?).
I spent a while trying to figure this out, and I think I got it. So, for the DeepSeek settings, the model is deepseek-chat.
I then tried running a batch translation on a small file, because that actually shows errors:
HTTP Status : 500
Error Type : Error while fetching: Error: 400 This response_format type is unavailable now
DeepSeek isn't happy with the format? Maybe an issue with the addon then?
So, in one final effort, I did some Googling to see what DeepSeek expects, closed Translator++, and edited the openai.js file (www/addons/openai/openai.js).
Ctrl+F for "response_format"; it was on line 60 in Notepad++.
response_format: zodResponseFormat(z.object(getResponseSchema(texts.length)), 'json_schema'),
Edit it to
response_format: {'type': 'json_object'}
Tested a few single lines, and now I get translations. What I don't get is that OpenRouter is able to use DeepSeek just fine without the edit. I gave OpenRouter a quick test with another model (google/gemini-2.5-flash-preview-09-2025) to see if it still worked.
It still translated the 2 lines I tested. But for all I know it could fuck with other models, so I'd keep 2 versions of the file just in case. This was all 100% dumb guesswork. If anybody knows more, please go ahead and tell us.
tl;dr:
Open www/addons/openai/openai.js (back it up first).
Ctrl+F for "response_format" (line 60 in Notepad++ for me).
response_format: zodResponseFormat(z.object(getResponseSchema(texts.length)), 'json_schema'),
Edit it to
response_format: {'type': 'json_object'}
Open Translator++.
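For anyone wondering why that edit helps: DeepSeek's own endpoint seems to only accept the plain JSON mode, while the addon sends the json_schema structured-output variant, which is exactly what that 400 error complains about. As a rough sketch (the endpoint and model name are from DeepSeek's public docs; the API key and prompt are placeholders, and I haven't verified this is exactly what the addon sends), a request that DeepSeek accepts looks like this:

    // Minimal sketch of a DeepSeek request using plain JSON mode (Node 18+, built-in fetch).
    async function testDeepSeekJsonMode(apiKey) {
      const res = await fetch('https://api.deepseek.com/chat/completions', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Authorization': `Bearer ${apiKey}`, // placeholder: your own DeepSeek key
        },
        body: JSON.stringify({
          model: 'deepseek-chat',
          messages: [
            { role: 'system', content: 'Translate the user text to English. Reply with JSON like {"1": "..."}.' },
            { role: 'user', content: 'こんにちは' },
          ],
          // The json_schema variant is what triggers "This response_format type is unavailable now".
          response_format: { type: 'json_object' },
        }),
      });
      const data = await res.json();
      console.log(data.choices[0].message.content);
    }

My guess is that OpenRouter normalizes the response_format before passing it upstream, which would explain why it works there without the edit, but that's pure speculation.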
If you want to get rid of them, log out and block the site.
I use NextDNS. I add the following domains to the Denylist: *.dreamsavior.net and *.update.dreamsavior.net.
It's been working for six months now.
Now I'll try making my own build.
Yes, you can use the OpenAI addon for free in two cases:
1. Through G4F, but I couldn't get that to work correctly.
2. Through a local AI. I got it working with a locally hosted server and a model made specifically for translation, "zongwei/gemma3-translator:1b", running on my computer's own processing power. This has been tested and works, but it took me 20 hours to set up.
It requires a powerful processor and a lot of memory; I did all of this on Windows 11.
But translating a lot is difficult; my i5-1235U processor is too slow.
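For anyone who wants to try the local route without waiting for a full write-up: if the local server is something like Ollama (that's just my guess, since the link above is hidden and the model tag looks like an Ollama one), the OpenAI addon can usually be pointed at an OpenAI-compatible endpoint like the one below. The URL and port are Ollama defaults, not anything confirmed by this post:

    // Hypothetical sketch: calling a local OpenAI-compatible server (Ollama's default port)
    // with the translation model mentioned above. No real API key is needed for a local server.
    async function localTranslate(text) {
      const res = await fetch('http://localhost:11434/v1/chat/completions', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          model: 'zongwei/gemma3-translator:1b',
          messages: [
            { role: 'system', content: 'Translate the following Japanese text into English.' },
            { role: 'user', content: text },
          ],
        }),
      });
      const data = await res.json();
      return data.choices[0].message.content;
    }

In the addon settings you would then point the base URL at http://localhost:11434/v1 and use any dummy string as the API key, assuming the server really is Ollama or something equally OpenAI-compatible.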
Translator++ version 7.10.29 is my build with the add-ons I need, bought for a couple of bucks.
Sorry, English is not my language, I am writing through a translator.
Your Patreon account data isn’t stored inside the app itself — it’s located in the AppData\Local\translator++ folder.
Also, the "trans.js" file in the versions shared here is replaced with the public version.
If it weren't replaced, nobody would be able to use it.
My translation contains the § character instead of line breaks; I wonder which version will fix this? I replaced all the § characters with spaces, but that makes the lines very long.
Translator++ & LinguaGacha Combo – Free AI Game Translation Guide
Yes, you can access AI for free. Simply register on NVIDIA's site, verify your account, and generate an API key.
After clicking “Generate API Key”, make sure to copy the key immediately from "api_key =". You will not be able to retrieve it again once the page is closed. Even though your account shows a list of active keys, they are hidden behind ***. If you lose or forget your key, you must generate a new one. I am not sure how many active API keys are allowed at once.
Token Usage
Maximum tokens = [tokens sent (prompt + translation text) + tokens used for reasoning (if enabled & supported by certain AI) + AI response tokens].
I have grouped different AIs by their maximum token limits based on my own testing.
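To make the token formula concrete, here is a rough sketch of a request against NVIDIA's OpenAI-compatible endpoint. The base URL is the one NVIDIA publishes for its hosted models; the model id and the max_tokens value are placeholders I picked for illustration, so check the site's model list for the exact strings:

    // Sketch only: prompt tokens + reasoning tokens (if any) + reply tokens must all fit
    // inside the model's maximum; max_tokens below caps only the reply portion.
    async function nvidiaTranslate(apiKey, text) {
      const res = await fetch('https://integrate.api.nvidia.com/v1/chat/completions', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Authorization': `Bearer ${apiKey}`,
        },
        body: JSON.stringify({
          model: 'meta/llama-3.3-70b-instruct', // example id, may differ from what the site lists
          messages: [
            { role: 'system', content: 'Translate the following Japanese game text into English.' },
            { role: 'user', content: text },
          ],
          max_tokens: 1024, // placeholder reply budget
        }),
      });
      const data = await res.json();
      return data.choices[0].message.content;
    }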
Using LinguaGacha
To simplify automatic translation with AI, I use LinguaGacha (about a 104 MB download).
Basic workflow: Translator++ → Japanese text.xlsx → LinguaGacha → English translation.xlsx → Translator++
Exporting from Translator++
Example: translating an RPGM MV project from Japanese to English. I created a folder and set the export destination for the .xlsx files to: C:\Users\%USERNAME%\Desktop\my game translation project\raw
With your translation project open, click the Translator++ logo (top right) → Export Project → To XLSX File. Select your destination folder and click Select Folder.
Right click and select Edit Arguments to adjust parameters like top_p, temperature, presence penalty, and frequency penalty. Only enable those supported by the chosen AI.
Example: DeepSeek 3.1 Terminus does not support presence/frequency penalty → leave disabled.
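For reference, those arguments map onto the standard OpenAI-style sampling fields shown below. The values are only illustrative defaults, and the exact field names LinguaGacha sends may differ from this sketch:

    // Standard OpenAI-style sampling fields; only enable what the chosen model supports.
    const samplingArgs = {
      temperature: 0.7,        // lower = more literal, higher = more creative
      top_p: 0.95,             // nucleus sampling cutoff
      // presence_penalty: 0,  // leave disabled for models that don't support it,
      // frequency_penalty: 0, // e.g. the DeepSeek 3.1 Terminus case above
    };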
Glossary
This ensures consistent translation of specific Japanese terms into English. Very useful for technical or game-specific terminology. Don’t forget to enable the glossary switch if filled.
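As a made-up example of the kind of entries that belong in it (the exact input format depends on LinguaGacha's glossary screen):
魔王 → Demon Lord
ギルドカード → Guild Card
アイリス → Iris (character name)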
Custom Prompts
Although default prompts exist, advanced users should customize them for each game. A good prompt helps maintain tone, cultural nuance, and consistency.
Use {source_language} and {target_language} in prompts for flexibility.
Always include example translations—AI understands better with examples.
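A minimal prompt along these lines, purely as my own illustration and not something from the guide:

    You are a professional game translator. Translate the following RPG Maker MV game text from {source_language} to {target_language}.
    Keep character names consistent, preserve the original tone, and never translate control codes such as \N[1], \C[2], or %1.
    Example:
    {source_language}: 「おはよう、勇者さま！」
    {target_language}: "Good morning, Hero!"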
If you cancel, click Stop and wait until LinguaGacha fully halts. Closing the app prematurely will discard progress.
To resume, click Continue Task (only works if the app remains open).
Be careful: clicking Start instead of Continue Task will overwrite previous progress.
⚠ Reminder: LinguaGacha does not automatically back up or create new folders for each session. You must manually secure your output folder contents before starting a new translation.
Importing Back into Translator++
You can import all or selected files into specific columns (Initial, Machine Translation, Better Translation, Best Translation).
Example: Importing CommonEvents.xlsx
With your project open, click Translator++ logo → Import → Translation Spreadsheet.
In Import Spreadsheet Options, select Import File(s). Navigate to: C:\Users\%USERNAME%\Desktop\my game translation project\output\data Choose CommonEvents.xlsx → click Open. Select the destination column and check Overwrite Target only if you want to replace existing translations. Click Import.
Done! Now carefully review the imported translations.
Previously I used LinguaGacha with the Google API, but after only 1 minute a warning appeared that the limit was reached, even though I used the smallest model that is supposed to be free (Gemini 2.0 Flash-Lite). With this API tutorial, do I need an NVIDIA graphics card? I'm using an AMD card.
You don’t need an NVIDIA GPU to use NVIDIA’s AI models via API.
When you access models like Llama 3.3 or DeepSeek through NVIDIA’s API or cloud services, all the processing happens on NVIDIA’s servers. Your device just sends requests and receives responses, so it works fine even with an AMD GPU or no GPU at all.
Maybe you could share the build directly with TheExorcist- so he can take a look and see if it can be adjusted to work without linking to Patreon. That way, everyone might benefit from a smoother setup.
Here's my build, set up for translating Ren'Py games. All the junk for translating other engines has been removed, and everything that might still be needed has been archived.
Use it; 2 bucks is not a lot of money, even for Ukraine.
Works without a Patreon link or accounts.
If you don't want to be detected, use something like the NextDNS setup described above, or any other blocking tool.
If anyone is interested, I can try to explain how to set up the OpenAI addon with a local AI, but it's probably better to ask Copilot or Gemini.
Hello everyone, I’d like to share some Translator++ add-ons here. Please note that some of them are still older versions, but hopefully they can still be useful for your projects. Feel free to check them out!
If anyone has any new add-ons, I hope you can share them here.
Thank you very much for the T++ build and the info about NextDNS; their DNS servers seem to be fast. You can also block those domains by adding the addresses you mentioned to the Windows hosts file.
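One caveat with the hosts-file route: it doesn't support wildcards, so each hostname has to be listed individually. A guess at what the entries could look like (the exact hostnames the app contacts may differ):

    # C:\Windows\System32\drivers\etc\hosts  (edit as administrator)
    0.0.0.0 dreamsavior.net
    0.0.0.0 www.dreamsavior.net
    0.0.0.0 update.dreamsavior.net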