Tool Translator++ 4.9.28 Standard Version / Developer Version

5.00 star(s) 2 Votes
Oct 29, 2024
Hello, my friends! Could you share some places that offer free provider APIs?
The T++ DeepL plugin is acting up, and AI models give much better translations these days.
I'll start by sharing some popular free providers I'm using now, along with how many free calls you get per day:
(250/day)
(200/day)
(100/day)
(50/day)
Chutes used to be unlimited, but then one day it changed – now it's just 200/day.
So, friends, can you share any more generous providers?
If you're willing to share, I'd really appreciate it.
Have you tried these three? I'm not too well versed in AI models these days.



| Provider | Free Quota | Notes |
| --- | --- | --- |
| LibreTranslate | Unlimited (self-hosted) | Free and open-source. Some public instances available. |
| Lingva Translate | Unlimited (frontend) | A privacy-focused frontend for Google Translate. |
| Argos Translate | Unlimited (offline) | Based on OpenNMT. Installable desktop/CLI version. |
| OpenTDB + FreeTTS API (Speech) | ~100/day | For text-to-speech if you want multilingual audio. |
| Translate-Server (via HuggingFace) | ~100/day | Based on MarianMT/MBart models. |
 

ripno

Newbie
Jan 27, 2023
A call refers to one request made to the AI server. Each time you send a message or prompt for the AI to process — whether it's a sentence, paragraph, or a block of text — that counts as one call.

For example, if you send one sentence to be translated, that's one API call. If you send 100 sentences in one batch, it's still just one call. That’s why batching multiple lines into a single request is often more efficient, especially when working within daily call limits.

Regarding your usage of the free API calls, it is very likely that your free quota is used up quickly because each API call processes one sentence at a time. If you translate one sentence per call, the number of calls can add up fast, especially if you translate large amounts of text.

To optimize your usage and avoid running out of free calls too quickly, you might consider sending multiple sentences in a single API call if the service supports it. This way, you reduce the number of calls while still getting all your translations done.

For comparison, I usually send up to 200 lines of text in a single request. So even when translating thousands of lines from a visual novel, the quota has been sufficient for me.

You might consider batching more text per call if the API allows it — it can really help optimize usage.
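To make the batching idea concrete, here is a minimal sketch (my own illustration, not the actual T++ or DeepL plugin code; the `<n>` tagging scheme is an assumption): number each source line, send the whole block as one prompt, then split the tagged reply back into per-line translations.

```python
def build_batch_prompt(lines):
    """Pack many source lines into one numbered prompt so a single
    API call translates them all; the tags let us re-split the output."""
    numbered = "\n".join(f"<{i}> {line}" for i, line in enumerate(lines))
    return (
        "Translate each numbered line to English. "
        "Keep the <n> tags and return exactly one line per tag.\n\n" + numbered
    )

def split_batch_reply(reply, expected):
    """Map the model's tagged reply back to a list, index-aligned with
    the input; lines whose tags never came back stay empty strings."""
    out = [""] * expected
    for raw in reply.splitlines():
        raw = raw.strip()
        if raw.startswith("<") and ">" in raw:
            tag, _, text = raw.partition(">")
            idx = tag.lstrip("<").strip()
            if idx.isdigit() and int(idx) < expected:
                out[int(idx)] = text.strip()
    return out
```

One request now covers the whole batch, so a 200-call/day quota stretches over thousands of lines instead of 200 sentences.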
 

wsg123

Newbie
Mar 4, 2022
Thanks for sharing. The services you recommended all require manual copying from a webpage.
What we really need is a translation method like OpenAI's – something that works without a server or webpage. Just a method that only requires an API key to call.
If I can't find the right solution, I'll give these a shot, or just wait for the next day's quota.
 
Oct 29, 2024
Join me on the server, then. I think I know some guys who know a lot more about these than me, though.
 

wsg123

Newbie
Mar 4, 2022
I know how these API calls work. We submit text in batches, like chunks of 20 lines at a time.
By the way, each provider also has a daily token limit. Both input and output count toward this limit.

Just to give you an idea:
If we count one English letter as one token, a provider with a 200-token limit could handle about 5,000 English letters per day.
But for a game? A single update version can easily have over 10,000 letters.
 

ripno

Newbie
Jan 27, 2023
Just to share my experience — I’ve been using Chutes.ai quite a bit, and so far, I haven’t hit any token/day limit.
For example, the Tyrano game I'm currently translating has 3,113 lines of text, with a total of 122,515 characters. I was able to process all of that through multiple requests without any issues.
The AI translation part alone was completed in less than an hour, which I found really efficient.

Chutes.ai does apply daily token limits, though they don’t publish specific numbers openly. However, since Chutes uses a serverless AI backend—similar to platforms like Fireworks.ai—we can reasonably infer their limits follow the same structure.

According to Fireworks.ai documentation, the daily token limits are based on the size of the model:
| Model Size | Estimated Daily Token Limit |
| --- | --- |
| < 40B parameters | ~2.5 billion tokens/day |
| 40B–100B parameters | ~1.25 billion tokens/day |
| > 100B parameters (e.g. DeepSeek-R1) | ~600 million tokens/day |

Chutes.ai supports large models like DeepSeek-R1 and LLaMA variants, so it's likely they follow these same quotas. If you prompt a large model with, say, 1,000 tokens per request (input + output), you'd be able to run ~600,000 generations per day with a 100B+ model before hitting the cap.
I think the free plan is actually quite generous for text-based projects like this.
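That last figure is easy to sanity-check (using the assumed quota numbers from the table above, which are inferred from the Fireworks.ai docs rather than published by Chutes):

```python
# Assumed daily token budgets by model size (estimates, not official
# Chutes.ai numbers; see the Fireworks.ai-derived table above).
DAILY_TOKEN_LIMITS = {
    "under_40B": 2_500_000_000,
    "40B_to_100B": 1_250_000_000,
    "over_100B": 600_000_000,  # e.g. DeepSeek-R1-class models
}

def requests_per_day(daily_limit, tokens_per_request):
    """How many requests fit in a daily token budget when each request
    (input + output combined) costs a fixed number of tokens."""
    return daily_limit // tokens_per_request
```

With 1,000 tokens per request against the 100B+ tier, that is 600,000,000 // 1,000 = 600,000 requests per day, matching the estimate above.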
 

jaden_yuki

Well-Known Member
Jul 11, 2017
I already knew that Chutes would impose a limit someday, but OpenRouter surprised me. Did they go down from 200/day to 50/day? I'm disappointed.
 

jaden_yuki

Well-Known Member
Jul 11, 2017
The more lines you send in a single request, the worse the quality of the translation.
 

archedio

Well-Known Member
Modder
Apr 18, 2019
Chutes used to be unlimited, but then one day it changed – now it's just 200/day.
Wait, holy... chutes isn't unlimited anymore? This is the first I'm hearing about it. What a massive shame...
Though to be fair, I only really use it for DeepSeek-V3-0324 anyway.
 

ripno

Newbie
Jan 27, 2023
The more lines you send in a single request, the worse the quality of the translation.

I understand the concern, but that’s **not entirely accurate**—especially when using a proper setup like a **Python script** and a capable model like **DeepSeek**.

### Here’s what you need to know:

1. **Translation quality doesn't necessarily degrade with longer text**, as long as the text fits within the model's token limit. Models like **DeepSeek** (and other large-parameter models) are fully capable of handling long inputs accurately.

2. I’m using a **Python script** that:
- Sends **hundreds of lines of text** in one request
- Structures prompts clearly
- Ensures the output is **fully received**, without getting cut off mid-way — meaning you get a **complete response in one go**, without having to ask for "continue" or follow-up chunks
- Bypasses browser limitations like output truncation or “Continue” prompts

3. Because the full text is processed in one batch, the model:
- Maintains **context**
- Keeps **style and terminology consistent**
- Produces **clean, high-quality translations**, even with long content

The downside is that you need to manually copy the original text and paste it into the Python script, then after the translation is received, you have to copy the translated text and save it separately. This process isn’t fully automated but helps maintain translation quality for long texts.

4. I’m currently using **DeepSeek models** via the **chutes.ai server**, which hosts high-performance open-source models.

5. I’m aware of the newly released **DeepSeek-TNG-R1T2-Chimera** (July 2025), which combines the strengths of several DeepSeek variants. I haven’t used it yet, but it's worth checking out if you're into model testing and optimization.


### TL;DR

- **It’s not about how long the text is**, it’s how you send it and which model you use.
- With a proper script and models like **DeepSeek**, long texts can be translated **consistently and accurately**.
- If users are only using browser-based interfaces and relying on "Continue" commands, **inconsistencies are expected**.
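As a rough sketch of the safeguards such a script needs (my own example, since the actual script isn't posted here): split the source into request-sized batches, and accept a reply only if every line came back.

```python
def chunk_lines(lines, size=200):
    """Split a long script into batches; 200 lines per request is an
    arbitrary choice here, mirroring the batch sizes mentioned upthread."""
    return [lines[i:i + size] for i in range(0, len(lines), size)]

def reply_is_complete(source_lines, translated_lines):
    """Treat a reply as complete only if there is at least one output
    line per input line; a truncated response fails this check and
    should be retried rather than saved."""
    return len(translated_lines) >= len(source_lines)
```

This is what replaces the browser's "continue from..." dance: a failed completeness check just triggers an automatic retry of that batch.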
 

jaden_yuki

Well-Known Member
Jul 11, 2017
The downside is that you need to manually copy the original text and paste it into the Python script, then after the translation is received, you have to copy the translated text and save it separately. This process isn’t fully automated but helps maintain translation quality for long texts.
not really: (for translator++)
but the best one is:
 

ripno

Newbie
Jan 27, 2023
not really: (for translator++)
but the best one is:
I’ll give LinguaGacha a try. It seems promising for streamlining the translation workflow.

By the way, could you share a complete guide or setup instructions for the GPT4Free addon for Translator++? Including the parameters that need to be filled.
I’d really appreciate a step-by-step explanation, since I’m interested in testing it out for my workflow.
 

wsg123

Newbie
Mar 4, 2022
I tried LinguaGacha, but it still seems to require an API key to work.
So the question comes back to service providers – where can we find one that offers really generous usage limits?

LinguaGacha does have a local interface, but you'd need at least an 8GB VRAM graphics card to get decent translation quality.
Plus, when translating, it might take up over 90% of your GPU, so you can't do much else on your PC besides LinguaGacha.

About that GPT4Free addon you mentioned – is that the plugin for GPT4All?
If so, that plugin also uses a local interface. You can find details in T++'s official documentation.
 

Hero_Protagonist

New Member
Apr 6, 2024
Can you elaborate on how you send **hundreds of lines of text** in one request? I tried it using my local LLM and it was very hard to tune. Also, how do you get the response in JSON format, for example with a translation result for each sent line? I tried specifically tuning it to return JSON, but the model couldn't format it right.
 

ripno

Newbie
Jan 27, 2023
Just to clarify how I’m working: I don’t use batch systems like Translator++. I use a Python script that mimics chat-like behavior—but instead of sending requests one by one, I make requests directly to chutes.ai through the script (not a local LLM), submitting larger segments of text in one go.

Unlike browser chat UIs that enforce token limits and often cut off responses mid-output—requiring users to type something like “continue from [specific text]”—my script avoids that problem entirely.

Since the request goes through Python and not a browser, I’m able to send far more tokens in a single call, and the model receives everything as one complete context. That results in a full response—clean and continuous—without needing any follow-up prompts.

I use this method to translate Tyrano-engine visual novels, but this doesn’t mean it’s an automated batch workflow like in Translator++. I still manually extract the script using a Python script, feed it into the request, then manually handle the output before injecting it back via the script again.

It’s not fully automated, but this semi-manual workflow gives me much better control over translation consistency, terminology, and context—especially when working with long-form VN scripts.
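A stripped-down version of that kind of direct request might look like this. This is a sketch, not the actual script: it assumes an OpenAI-compatible chat endpoint, and both the endpoint URL and the model id are placeholders to verify against the provider's documentation.

```python
import json
import urllib.request

API_URL = "https://llm.chutes.ai/v1/chat/completions"  # placeholder endpoint; check provider docs
MODEL = "deepseek-ai/DeepSeek-V3-0324"                 # placeholder model id

def build_payload(text, model=MODEL, max_tokens=8192):
    """OpenAI-style chat payload carrying the whole text block in one request."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "Translate the following game script to English."},
            {"role": "user", "content": text},
        ],
        "max_tokens": max_tokens,  # generous cap so long replies are not cut off
        "temperature": 0.3,        # low temperature for consistent terminology
    }

def translate(text, api_key):
    """Send one large request and return the model's full reply."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(text)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=300) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the script controls `max_tokens` and the timeout directly, there is no browser UI to truncate the output mid-reply.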
 

Hero_Protagonist

New Member
Apr 6, 2024
I am also using a Python script to send requests. I use T++ to extract CSV files and then translate them with the script, but I do it line by line across all the CSVs. I'm interested in the details of how to set up the LLM to translate a batch of strings instead of going line by line. Maybe my local 8B model is too weak for that, and DeepSeek from chutes.ai will work better.
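One common workaround for the JSON-formatting problem described above (a generic sketch, not something posted in this thread): ask the model for one JSON object mapping line numbers to translations, then salvage the reply even when the model wraps it in prose or code fences.

```python
import json
import re

def extract_json(reply):
    """Grab the widest {...} span from a model reply and parse it.
    Returns None when no valid JSON object can be recovered, so the
    caller can retry the request instead of crashing."""
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if match is None:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
```

Smaller local models often fail this anyway; retrying on a `None` result, or falling back to a numbered-line scheme, tends to be more reliable than demanding perfect JSON from an 8B model.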
 