Tool RPGM SLR - Offline JP to EN Translation for RPG Maker VX, VX Ace, MV, MZ, and Pictures

Shisaye

Engaged Member
Modder
Dec 29, 2017
3,516
6,225
676
Just to clarify my earlier message:

The 400 error I’m seeing with SLR Translator seems to come from the request payload format, not from JavaScript itself. Your `fetch` example works because it only sends the basic fields. The SDKs (and possibly SLR Translator) add extra fields like `chat_template_kwargs`, `extra_body`, or `response_format`, which NVIDIA’s endpoint appears to reject.

That matches what Topdod found in Translator++ — changing `response_format` from `'json_schema'` to `{ "type": "json_object" }` stopped the 400 errors. So the difference is really about which fields are included in the JSON body, rather than the language used.

If you use DeepSeek 3.1 Terminus:
JavaScript:
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: '$NVIDIA_API_KEY',
  baseURL: 'https://integrate.api.nvidia.com/v1',
})

async function main() {
  const completion = await openai.chat.completions.create({
    model: "deepseek-ai/deepseek-v3.1-terminus",
    messages: [{"role":"user","content":""}],
    temperature: 0.2,
    top_p: 0.7,
    max_tokens: 16384,
    chat_template_kwargs: {"thinking":true},
    stream: true
  })

  for await (const chunk of completion) {
    const reasoning = chunk.choices[0]?.delta?.reasoning_content;
    if (reasoning) process.stdout.write(reasoning);
    process.stdout.write(chunk.choices[0]?.delta?.content || '');
  }

}

main();
If you use llama-3.3-nemotron-super-49b-v1.5:
JavaScript:
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: '$NVIDIA_API_KEY',
  baseURL: 'https://integrate.api.nvidia.com/v1',
})

async function main() {
  const completion = await openai.chat.completions.create({
    model: "nvidia/llama-3.3-nemotron-super-49b-v1.5",
    messages: [{"role":"system","content":"/think"}],
    temperature: 0.6,
    top_p: 0.95,
    max_tokens: 65536,
    frequency_penalty: 0,
    presence_penalty: 0,
    stream: true,
  })

  for await (const chunk of completion) {
    process.stdout.write(chunk.choices[0]?.delta?.content || '')
  }

}

main();
Well, as I told you, I still don't have an API key, they still haven't responded at all, and if asked for a phone number I will tell them to go fuck themselves.
So why not just do it yourself and tell me if I should add that as a toggle?
Just change the 2 bits in DSLREngine.js. (One for FullBatch, one for single requests.)

I assume you mean from this:
JavaScript:
const response = await fetch(apiUrl, {
    method: 'POST',
    headers: Object.assign({
            'Content-Type': 'application/json'
        },
        apiKey ? {
            'Authorization': `Bearer ${apiKey}`
        } : {}
    ),
    body: JSON.stringify({
        model: activeLLM,
        messages: messages,
        temperature: llmTemp,
        top_k: llmTopK,
        top_p: llmTopP,
        min_p: llmMinP,
        repeat_penalty: llmRepeatPen,
        max_tokens: maxTokenValue
    })
});
to this:
JavaScript:
const response = await fetch(apiUrl, {
    method: 'POST',
    headers: Object.assign({
            'Content-Type': 'application/json'
        },
        apiKey ? {
            'Authorization': `Bearer ${apiKey}`
        } : {}
    ),
    body: JSON.stringify({
        model: activeLLM,
        messages: messages,
        temperature: llmTemp,
        top_k: llmTopK,
        top_p: llmTopP,
        min_p: llmMinP,
        repeat_penalty: llmRepeatPen,
        max_tokens: maxTokenValue,
        response_format: { "type": "json_object" }  // Add this line
    })
});
 

ripno

Member
Jan 27, 2023
135
337
131
Hi Shisaye,
after discussing the issue with Gemini 3 Pro, I learned that the problem comes from parameter mismatches. NVIDIA’s API follows the strict OpenAI specification, so parameters like min_p, repeat_penalty, and sometimes top_k are not recognized and cause the request to fail.

With Gemini 3 Pro’s help, I tested a modified version of your code where those parameters are removed (or replaced with frequency_penalty / presence_penalty if needed), and it worked correctly with NVIDIA models. I also added a small logic check so that if the API URL contains , the translator automatically adjusts the parameters to match NVIDIA’s requirements.
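A rough sketch of that kind of provider check (the detection string and helper are illustrative assumptions, not the exact code from my attached file):

```javascript
// Rough sketch of the parameter adjustment described above. The
// detection string and function shape are assumptions for illustration.
function adjustBodyForProvider(apiUrl, body) {
    // NVIDIA's endpoint follows the strict OpenAI spec, so drop the
    // llama.cpp-style sampling parameters it does not recognize.
    if (apiUrl.includes('integrate.api.nvidia.com')) {
        const { min_p, repeat_penalty, top_k, ...allowed } = body;
        return allowed;
    }
    return body;
}
```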

Please note that I’m not a coding expert—this is just my observation and testing. If I misunderstood something, I hope you’ll understand that I only tried to analyze the issue and share what worked for me.
I also hope that these adjustments can be considered and possibly included in a future version of SLR Translator.

For your reference, I’ve attached my modified DSLREngine.js file so you can see the changes directly.
The password is Shisaye
 

Shisaye

Hi Shisaye,
after discussing the issue with Gemini 3 Pro, I learned that the problem comes from parameter mismatches. NVIDIA’s API follows the strict OpenAI specification, so parameters like min_p, repeat_penalty, and sometimes top_k are not recognized and cause the request to fail.

With Gemini 3 Pro’s help, I tested a modified version of your code where those parameters are removed (or replaced with frequency_penalty / presence_penalty if needed), and it worked correctly with NVIDIA models. I also added a small logic check so that if the API URL contains , the translator automatically adjusts the parameters to match NVIDIA’s requirements.

Please note that I’m not a coding expert—this is just my observation and testing. If I misunderstood something, I hope you’ll understand that I only tried to analyze the issue and share what worked for me.
I also hope that these adjustments can be considered and possibly included in a future version of SLR Translator.

For your reference, I’ve attached my modified DSLREngine.js file so you can see the changes directly.
The password is 123
I'm not a fan of only supporting a single api like that I want DSLR to be universal.
How about if I simply make all the parameters optional and add those 2 new ones?
So that if you leave the option blank in the options menu the parameter is not added to the request?
 

ripno

I'm not a fan of only supporting a single api like that I want DSLR to be universal.
How about if I simply make all the parameters optional and add those 2 new ones?
So that if you leave the option blank in the options menu the parameter is not added to the request?


Sure, go ahead. Making the parameters optional sounds like a good idea to keep DSLR universal.
 

Shisaye

I've released v2.030.
Most notable change: frequency_penalty, presence_penalty, stream, and chat_template_kwargs parameter options have been added to DSLR, and all parameters are optional now.
(If you leave them blank, or set them to 69 if they are a number, they will not be sent to the endpoint at all.)
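In other words, something like this (the helper is hypothetical, not the actual DSLR code; it just shows the idea that an unset parameter is left out of the request entirely):

```javascript
// Hypothetical helper: a blank value (or 69 for numeric options)
// means the parameter is omitted from the request body altogether.
function maybeParam(key, value) {
    const unset = value === '' || value === null || value === 69;
    return unset ? {} : { [key]: value };
}

// frequency_penalty sits at the 69 sentinel and presence_penalty is
// blank, so only temperature ends up in the body.
const body = Object.assign(
    { model: 'deepseek-ai/deepseek-v3.1-terminus', messages: [] },
    maybeParam('temperature', 0.2),
    maybeParam('frequency_penalty', 69),
    maybeParam('presence_penalty', '')
);
```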


Changes since last changelog post:

2.030
Added frequency_penalty, presence_penalty, stream, and chat_template_kwargs options to DSLR.
Added prompt options for fallback model.
Parameters that are either left empty or set to 69 will no longer be sent to the endpoint.

2.029
When talking to the LLM about text commands, DSLR will now use the correct capitalization of the original.
Changed DSLR default settings.
Changed DSLR documentation.
Changed DSLR FullBatch max_tokens calculation to be substantially lower.
Added better error handling for '400 Bad Request' server errors.

2.028
Added more DSLR options for the fallback model. Fixed the existing ones.
Added adjustments to Temperature and Top K parameters during the last attempts.
If a batch would fail completely because even the fallback model ran out of tries, or it's not even enabled, then it will now accept the translation anyway, but add SHISAYE.FAILEDTRANSLATION into it.
Changed default max tokens to 4000.

2.027
Added FallBack LLM options to DSLR.
Removed outdated information from the documentation.
 

ChaosXD

Member
May 21, 2017
321
145
213
Is it normal to get this "SLRContextIDActors,1,nameSLRContextID" next to the translation? (e.g. Mikio            SLRContextIDActors,2,nameSLRContextID). I am using Sugoi_Toolkit_V15.
 

Shisaye

Is it normal to get this "SLRContextIDActors,1,nameSLRContextID" next to the translation? (e.g. Mikio            SLRContextIDActors,2,nameSLRContextID). I am using Sugoi_Toolkit_V15.
That will automatically be removed if you press the "Fix Cells and Check For Errors" button. Even if you forget to do that it will be removed during export and not show up in the resulting files.

It's an issue with the T++ UI this is using; for some reason it refuses to show identical entries of the same object.
To force it to still show everything, I've just added the context of the cell at the end, because the context is "almost" always unique.
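The cleanup amounts to something like this hypothetical sketch (the real implementation in the error-check and export steps may differ):

```javascript
// Hypothetical sketch of the cleanup "Fix Cells and Check For Errors"
// (and the exporter) performs: strip a trailing
// SLRContextID...SLRContextID marker plus the padding before it.
function stripContextId(cell) {
    return cell.replace(/\s*SLRContextID[\s\S]*?SLRContextID\s*$/, '');
}

stripContextId('Mikio            SLRContextIDActors,2,nameSLRContextID');
// → 'Mikio'
```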
 

Shisaye

I've released v2.032.
To further address the weak-hardware DSLR problem, I've implemented a new option to limit the fallback attempts to 5 single requests.
Meaning that instead of retrying the entire batch, it will only use the fallback model for the currently failed translation.
That makes using large models with weak hardware much more viable, because at worst it will only make 5 relatively quick attempts and not waste 2 hours.

But that new option would be a terrible idea for a fallback model using some kind of free limited requests/tokens, because it will spam small inefficient requests.
It's only a good idea for something unlimited and free. (If you host the model yourself.)
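The control flow is roughly this (a sketch with made-up helper names passed in as stand-ins; not DSLR's actual internals):

```javascript
// Rough sketch of the "Limit Fallback to Single Requests" option.
// The batch and single helpers are injected stand-ins; their names
// and return shapes are assumptions for illustration.
async function translateWithFallback(lines, { batch, single, limitFallbackToSingles }) {
    const result = await batch(lines, 'primary');
    if (result.failed.length === 0) return result.translations;

    if (limitFallbackToSingles) {
        // Retry only the failed lines one by one, at most 5 attempts
        // each with the fallback model; the batch is never rerun.
        for (const line of result.failed) {
            for (let attempt = 0; attempt < 5; attempt++) {
                const t = await single(line, 'fallback');
                if (t !== null) { result.translations[line.index] = t; break; }
            }
        }
        return result.translations;
    }
    // Previous behaviour: the fallback model retries the entire batch.
    return (await batch(lines, 'fallback')).translations;
}
```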


Changes since last changelog post:

2.032
Fixed it not sending the starter of the context prompt in DSLR.
Changed single request retries to 10 instead of 12.
Added new "Limit Fallback to Single Requests" option.
When enabled it will try 5 more times with the fallback model during single requests instead of the fallback model trying the entire batch again.
It will not retry the batch if those 5 attempts fail.

2.031
Implemented the DSLR endpoint parameter option changes for SEP.
Fixed some wrong text.
 

Shisaye

Some clarification regarding the latest update.

The "Prepare for Batch Translation" button only needs to be pressed once after creating a new project. You do not need to press it again; it won't do anything negative, but nothing positive either.

The new system determines whether or not you ever pressed the button via a small addition to the cache files, and will press the button for you if you try to start a batch translation without it.
That means on old projects it will assume that you've never pressed it until it has placed the new information, which is a bit annoying, but won't actually do anything negative to your project (beyond wasting your time).
You can turn this whole deal off in the options menu.
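The check boils down to something like this (a hypothetical illustration; the key name and the shape of the cache object are invented for the example):

```javascript
// Hypothetical illustration of the cache-file marker check.
// If the marker is missing, the prepare step runs automatically
// and the marker is written so it never runs again.
function ensurePrepared(cache, runPrepare) {
    if (!cache.batchPrepared) {
        runPrepare();              // presses the button for you
        cache.batchPrepared = true;
    }
}
```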

Was this really necessary? Apparently so. Some people would rather put in the effort of writing a long message shitting on a project than read some basic instructions.
RTFM.jpg
 

Shisaye

I've made a translation for RJ281598 using DeepSeekV3.2EXP as a test, because that's, weirdly enough, currently the cheapest model, but I can't really recommend it.
When it worked, it worked well, but in less than 200k characters it shit the bed 6 times and just replied with "A" or "/" when asked for a full batch.
Not a horrible error to get, since obviously it won't really bill you a lot of output tokens for that, but DeepSeekV3-0324 never did that.

I have not tested the proper 3.2 yet. (It came out today.)
I also haven't tried 3.1 Terminus yet. The normal 3.1 is pretty bad.
 

Shisaye

Finished tests with basically all DeepSeek versions now.
My impressions are as follows:

DeepSeekV3: Decent - Not great, not terrible.

DeepSeekV3-0324: Good - Follows instructions very well.

DeepSeekV3.1: Bad - Does not follow instructions, constantly screws up.

DeepSeekV3.1-Terminus: Good - Basically the same as V3-0324.

DeepSeekV3.2-EXP: Bad - Hangs up spamming the same letter quite often.

DeepSeekV3.2: Not great - Hangs up less than 3.2-EXP, but it still happens.

I did not test 3.1-Exacto or 3.2-Special because they are reasoning models with really weird output that is currently not supported by DSLR.

Translation quality for all of them is pretty bland/sterile, and they prefer to take things in a non-lewd way, but the tests were done at 0.1 temperature to give them the best chance to follow instructions, so that's not particularly surprising.


TLDR:
I still only really recommend V3-0324, which is also the model I've tested the most, but if you have a better provider for V3.1-Terminus then go with that, since there doesn't seem to be a whole lot of difference.
 

Shisaye

Bad news: it seems like OpenRouter no longer offers free requests to premium models. You can now only get free requests for stuff you could host yourself or that is pretty shit.
No DeepSeekV3.X at all anymore either.

As a result, OpenRouter is now basically worthless for DSLR, unless you're planning to pay money.
And even that is a bit "meh" because paying a provider directly is probably cheaper, and OpenRouter credits expire after a year, which is a bit bullshit.
 

CodedGamer

New Member
May 24, 2020
10
0
142
Please send me the .trans of that project, the matching staging data from www>php>cache, and the name of the game and its version.
The game is NTRファンタジー -運命の花冠と青光の宝石- SIDE フォルスガーランド
 

Shisaye

It's on 2.039.
When I try to export CommonEvents.json I get this error:
That CommonEvents has to be one of the weirdest I've ever seen.
I don't think it was made with the official MZ editor.

I've changed the core MZ plugin parsing for CommonEvents to make it no longer crash, which I'm definitely not going to regret, and I'm not even sure if it exports correctly now, because I'm not sure what those bits are even supposed to do.

Please try again with v2.040.
 