Hi Shisaye,
I’ve been experimenting with the SLR translator (version 2.029) using the DeepSeek 3.1 Terminus model from NVIDIA’s build site. The free tier is limited to 40 requests per minute, but unlike other servers I’ve tried, I couldn’t find any daily token limit.
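As a side note, staying under that 40 requests/minute cap is easy to handle client-side. Here is a minimal throttle sketch (my own, not part of SLR); the clock and sleep functions are injectable only so it can be checked without real waiting:

```python
import time

class Throttle:
    """Space out requests so we never exceed a per-minute limit."""

    def __init__(self, per_minute, clock=time.monotonic, sleep=time.sleep):
        self.interval = 60.0 / per_minute  # seconds between requests (1.5 s for 40/min)
        self.clock = clock
        self.sleep = sleep
        self.next_allowed = clock()

    def wait(self):
        # Block until the next request slot, then reserve the one after it.
        now = self.clock()
        if now < self.next_allowed:
            self.sleep(self.next_allowed - now)
            now = self.next_allowed
        self.next_allowed = now + self.interval
```

Calling `wait()` before each API request keeps the client at or below the stated limit without any server-side coordination.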
I tested it and ended up consuming more than 6 million tokens in just a few hours, mainly to check which values of temperature and top_p would give the best results.
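For context, the sweep itself was nothing fancy, just a grid over candidate values. The lists below are illustrative, not the exact ones I used:

```python
from itertools import product

# Candidate values to compare (illustrative only).
temperatures = [0.0, 0.1, 0.3, 0.7, 1.0]
top_ps = [0.5, 0.7, 0.9, 1.0]

# Every (temperature, top_p) combination to test.
grid = list(product(temperatures, top_ps))

for temperature, top_p in grid:
    # Each pair would be passed to client.chat.completions.create(...)
    # and the translation quality compared by hand.
    pass
```

Even a small grid like this (20 combinations) adds up quickly in tokens when each cell gets several test translations.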
In the DSLR section of your translator, the only fields I modified were:
However, when I checked the console (F12), I saw these errors:
[SLRPersistentCacheHandler] No cache found for dslr.
[SLRPersistentCacheHandler] Attempting to load backup cache for dslr.
Failed to load resource: the server responded with a status of 400 ()
[DSLR] Response failed ok test. Response Error Status: 400
redGoogle is not a translator engine.
My initial guess is that the request code might be different from what the server expects.
Code:
import os

from openai import OpenAI

# Read the key from the environment instead of hardcoding it.
client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key=os.environ.get("NVIDIA_API_KEY"),
)

completion = client.chat.completions.create(
    model="deepseek-ai/deepseek-v3.1-terminus",
    messages=[{"role": "user", "content": ""}],
    temperature=0.1,
    top_p=0.7,
    max_tokens=16384,
    # extra_body keys are merged into the top level of the request JSON.
    extra_body={"chat_template_kwargs": {"thinking": True}},
    stream=True,
)

for chunk in completion:
    # DeepSeek streams its reasoning tokens in a separate delta field.
    reasoning = getattr(chunk.choices[0].delta, "reasoning_content", None)
    if reasoning:
        print(reasoning, end="")
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
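To narrow down the 400, it might help to build the request body by hand and compare it field by field with whatever DSLR actually sends. This is just my reconstruction of what the client above puts on the wire (the `extra_body` keys end up at the top level), not a confirmed spec:

```python
import json

# Hand-built request body matching the client call above, for comparison
# against DSLR's actual payload. If DSLR sends a different shape, that
# difference could be what triggers the 400.
payload = {
    "model": "deepseek-ai/deepseek-v3.1-terminus",
    "messages": [{"role": "user", "content": "test"}],
    "temperature": 0.1,
    "top_p": 0.7,
    "max_tokens": 16384,
    "chat_template_kwargs": {"thinking": True},
    "stream": False,
}
body = json.dumps(payload)
```

POSTing this body to the `/v1/chat/completions` endpoint (with the Authorization header set) and reading the JSON error response should show which field, if any, the server rejects; 400 responses usually carry a message naming the offending parameter.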
I recommend signing up, verifying your account, and trying the AI directly on NVIDIA’s server. That way, you should be able to reproduce the same error and confirm whether it’s an issue with the request format or something else.