Tool RPGM SLR - Offline JP to EN Translation for RPG Maker VX, VX Ace, MV, MZ, and Pictures

Shisaye

Engaged Member
Modder
Dec 29, 2017
Just to clarify my earlier message:

The 400 error I’m seeing with SLR Translator seems to come from the request payload format, not from JavaScript itself. Your `fetch` example works because it only sends the basic fields. The SDKs (and possibly SLR Translator) add extra fields like `chat_template_kwargs`, `extra_body`, or `response_format`, which NVIDIA’s endpoint appears to reject.

That matches what Topdod found in Translator++ — changing `response_format` from `'json_schema'` to `{ "type": "json_object" }` stopped the 400 errors. So the difference is really about which fields are included in the JSON body, rather than the language used.
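To make the payload difference concrete, here is a minimal illustration (field names are taken from the posts above; nothing here calls the API, and the object names are my own). The bare-bones body is what a plain `fetch` sends; the SDK-style body carries the extra fields that NVIDIA's endpoint appears to reject:

```javascript
// Bare payload: only the basic fields a plain fetch example sends.
const bareBody = {
  model: "deepseek-ai/deepseek-v3.1-terminus",
  messages: [{ role: "user", content: "Hello" }],
  temperature: 0.2,
  top_p: 0.7,
  max_tokens: 16384,
};

// SDK-style payload: same fields plus the extras the SDKs (and possibly
// SLR Translator) add on top.
const sdkStyleBody = {
  ...bareBody,
  chat_template_kwargs: { thinking: true },
  response_format: "json_schema", // the variant that triggered 400s
};

// The extra keys are the only difference between the two payloads.
const extras = Object.keys(sdkStyleBody).filter((k) => !(k in bareBody));
console.log(extras); // [ 'chat_template_kwargs', 'response_format' ]
```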

If the user uses DeepSeek V3.1 Terminus:
JavaScript:
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.NVIDIA_API_KEY, // was the literal string '$NVIDIA_API_KEY'; read the key from the environment instead
  baseURL: 'https://integrate.api.nvidia.com/v1',
})

async function main() {
  const completion = await openai.chat.completions.create({
    model: "deepseek-ai/deepseek-v3.1-terminus",
    messages: [{"role":"user","content":""}],
    temperature: 0.2,
    top_p: 0.7,
    max_tokens: 16384,
    chat_template_kwargs: {"thinking":true},
    stream: true
  })

  for await (const chunk of completion) {
    const reasoning = chunk.choices[0]?.delta?.reasoning_content;
    if (reasoning) process.stdout.write(reasoning);
    process.stdout.write(chunk.choices[0]?.delta?.content || '');
  }

}

main();
If the user uses llama-3.3-nemotron-super-49b-v1.5:
JavaScript:
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.NVIDIA_API_KEY, // was the literal string '$NVIDIA_API_KEY'; read the key from the environment instead
  baseURL: 'https://integrate.api.nvidia.com/v1',
})

async function main() {
  const completion = await openai.chat.completions.create({
    model: "nvidia/llama-3.3-nemotron-super-49b-v1.5",
    messages: [{"role":"system","content":"/think"}],
    temperature: 0.6,
    top_p: 0.95,
    max_tokens: 65536,
    frequency_penalty: 0,
    presence_penalty: 0,
    stream: true,
  })

  for await (const chunk of completion) {
    process.stdout.write(chunk.choices[0]?.delta?.content || '')
  }

}

main();
Well, as I told you, I still don't have an API key; they still haven't responded at all, and if they ask for a phone number I will tell them to go fuck themselves.
So why not just do it yourself and tell me if I should add that as a toggle?
Just change the two spots in DSLREngine.js. (One for full-batch requests, one for single requests.)

I assume you mean from this:
JavaScript:
const response = await fetch(apiUrl, {
    method: 'POST',
    headers: Object.assign({
            'Content-Type': 'application/json'
        },
        apiKey ? {
            'Authorization': `Bearer ${apiKey}`
        } : {}
    ),
    body: JSON.stringify({
        model: activeLLM,
        messages: messages,
        temperature: llmTemp,
        top_k: llmTopK,
        top_p: llmTopP,
        min_p: llmMinP,
        repeat_penalty: llmRepeatPen,
        max_tokens: maxTokenValue
    })
});
to this:
JavaScript:
const response = await fetch(apiUrl, {
    method: 'POST',
    headers: Object.assign({
            'Content-Type': 'application/json'
        },
        apiKey ? {
            'Authorization': `Bearer ${apiKey}`
        } : {}
    ),
    body: JSON.stringify({
        model: activeLLM,
        messages: messages,
        temperature: llmTemp,
        top_k: llmTopK,
        top_p: llmTopP,
        min_p: llmMinP,
        repeat_penalty: llmRepeatPen,
        max_tokens: maxTokenValue,
        response_format: { "type": "json_object" }  // Add this line
    })
});
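If that line ends up as a toggle, the change could be sketched like this (a hypothetical helper; the function name and the `useJsonObjectFormat` setting are my own, not actual SLR Translator options). The body is built once, and `response_format` is appended only when the toggle is on, so with the toggle off the payload matches the current DSLREngine.js body exactly:

```javascript
// Hypothetical toggle sketch: build the fetch body, conditionally adding
// response_format. Setting names mirror the variables used in DSLREngine.js.
function buildRequestBody(settings, messages) {
  const body = {
    model: settings.activeLLM,
    messages: messages,
    temperature: settings.llmTemp,
    top_k: settings.llmTopK,
    top_p: settings.llmTopP,
    min_p: settings.llmMinP,
    repeat_penalty: settings.llmRepeatPen,
    max_tokens: settings.maxTokenValue,
  };
  if (settings.useJsonObjectFormat) {
    body.response_format = { type: "json_object" }; // the one extra line
  }
  return body;
}

// Toggle off: no response_format field, same as the current payload.
const off = buildRequestBody({ activeLLM: "m", useJsonObjectFormat: false }, []);
console.log("response_format" in off); // false

// Toggle on: the extra field is included.
const on = buildRequestBody({ activeLLM: "m", useJsonObjectFormat: true }, []);
console.log(on.response_format.type); // json_object
```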