
Tool RPGM DazedMTLTool - A tool that provides quality MTL translations using ChatGPT

T-Block127

New Member
Jun 1, 2022
1
0
What am I doing wrong if I get this error?

Error code: 404 - {'error': {'message': 'The model `gpt-4-1106-preview` does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}
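That 404 usually means the account behind the API key doesn't have access to the requested model (at the time, `gpt-4-1106-preview` required paid API access). One way to debug is to list the models your key can actually see via the public `/v1/models` endpoint; the helpers below are a sketch of that idea, with fallback logic of my own, not something from the tool:

```python
import json
import urllib.request

def list_model_ids(api_key):
    """Return the model ids the given API key can access,
    via the public OpenAI /v1/models endpoint."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return [m["id"] for m in json.load(resp)["data"]]

def pick_model(available, preferred):
    """Pick the first preferred model the key actually has access to."""
    for model in preferred:
        if model in available:
            return model
    raise RuntimeError(f"No access to any of {preferred}")
```

If `gpt-4-1106-preview` isn't in the returned list, the fix is on the account side (billing/access), not in the tool.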
 

ufjoif

Newbie
Jan 6, 2019
18
11
Not sure I understand. The estimate is based on dollars and is calculated by going through the files and counting all the tokens using OpenAI's tokenizer. I have to guess what the output is going to be, so it's not 100% accurate, but generally the estimate is around 20% higher than what the actual cost will be.
1714985480298.png

What, exactly, does 0.002 represent? When I run translations with the latest model, GPT-3.5-turbo-0125, my actual costs are enormously lower - around 10% of the given estimate. I notice that you use a different model spec in your default ENV config; that may cost more than the latest one. The newer models are better and cheaper. Anyway, the OpenAI pricing page lists prices in dollars per million tokens - I'd like to update the pricing estimate to be closer to the actual value I'm seeing.

1714985712393.png

Those two spikes represent two full passes of the game I was working on translating, plus additional passes. The cost estimate was $11 per pass, but as you can see, the actual cost was enormously lower.

Are you actually just using the old, expensive model? You could be spending 90% less money.
 
  • Like
Reactions: jaden_yuki

jaden_yuki

Active Member
Jul 11, 2017
931
708
how does this tool deal with CSVs? does it receive only one file? or a folder with multiple files?
 

dazedanon

Engaged Member
Modder
Uploader
Donor
Jul 24, 2017
2,537
28,615
View attachment 3608110

What, exactly, does 0.002 represent? When I run translations with the latest model, GPT-3.5-turbo-0125, my actual costs are enormously lower - around 10% of the given estimate. I notice that you use a different model spec in your default ENV config; that may cost more than the latest one. The newer models are better and cheaper. Anyway, the OpenAI pricing page lists prices in dollars per million tokens - I'd like to update the pricing estimate to be closer to the actual value I'm seeing.

View attachment 3608112

Those two spikes represent two full passes of the game I was working on translating, plus additional passes. The cost estimate was $11 per pass, but as you can see, the actual cost was enormously lower.

Are you actually just using the old, expensive model? You could be spending 90% less money.
The 0.002 is dollars per 1K tokens. It might be out of date - I don't use 3.5 anymore.

I'm using GPT-4-Turbo, which is $0.01 (input) and $0.03 (output) per 1K tokens. Since the estimate can't know which requests will fail, it multiplies the entire cost of each request by 2. That's why the estimate is always higher than the actual cost.

The estimate is essentially a worst-case cost analysis. Basically, if you are going to invest in a game, it tells you the max it can potentially cost. But the actual costs are often lower because, obviously, not every request is going to fail. The whole purpose is to make sure you know roughly how much you are likely to spend.
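As a sketch of that worst-case math (the per-1K prices are the GPT-4 Turbo figures above; the token counts and the retry factor of 2 are illustrative, not the tool's actual code):

```python
# Worst-case estimate: price the input tokens and the guessed output
# tokens, then double the total on the assumption that every request
# might fail once and need to be re-sent.
IN_PRICE_PER_1K = 0.01    # GPT-4 Turbo input, dollars per 1K tokens
OUT_PRICE_PER_1K = 0.03   # GPT-4 Turbo output, dollars per 1K tokens

def worst_case_cost(input_tokens, est_output_tokens, retry_factor=2):
    base = (input_tokens / 1000) * IN_PRICE_PER_1K
    base += (est_output_tokens / 1000) * OUT_PRICE_PER_1K
    return base * retry_factor
```

With 100K tokens in and a guessed 100K out, that gives $8.00 worst case versus $4.00 for a run with no failures - which matches the pattern of estimates coming in at roughly double the actual bill.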
 

ufjoif

Newbie
Jan 6, 2019
18
11
The 0.002 is dollars per 1K tokens. It might be out of date - I don't use 3.5 anymore.

I'm using GPT-4-Turbo, which is $0.01 (input) and $0.03 (output) per 1K tokens. Since the estimate can't know which requests will fail, it multiplies the entire cost of each request by 2. That's why the estimate is always higher than the actual cost.

The estimate is essentially a worst-case cost analysis. Basically, if you are going to invest in a game, it tells you the max it can potentially cost. But the actual costs are often lower because, obviously, not every request is going to fail. The whole purpose is to make sure you know roughly how much you are likely to spend.
Consider trying 3.5-turbo-0125. It's pretty good.
 
Mar 12, 2018
300
562
Just got a message from GPT.


"Hi there,

We launched GPT-4o in the API—our new flagship model that’s as smart as GPT-4 Turbo and much more efficient. We’re passing on the benefits of the model’s efficiencies to developers, including:

  • 50% lower pricing. GPT-4o is 50% cheaper than GPT-4 Turbo, across both input tokens ($5 per 1 million tokens) and output tokens ($15 per 1 million tokens).
  • 2x faster latency. GPT-4o is 2x faster than GPT-4 Turbo.
  • 5x higher rate limits. Over the coming weeks, GPT-4o will ramp to 5x the rate limits of GPT-4 Turbo—up to 10 million tokens per minute for developers with high usage."
Let's hope this will bring your translations to the next level of quality and availability.
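Taking the per-million prices in that email at face value (and inferring GPT-4 Turbo's $10/$30 from "50% cheaper"), the saving on a translation pass is easy to sketch:

```python
# Dollars per 1M tokens; GPT-4o figures are from the email above,
# GPT-4 Turbo figures inferred from the stated 50% discount.
TURBO = {"in": 10.00, "out": 30.00}
GPT4O = {"in": 5.00, "out": 15.00}

def pass_cost(prices, in_millions, out_millions):
    """Cost of one translation pass given token volumes in millions."""
    return prices["in"] * in_millions + prices["out"] * out_millions
```

A pass that pushes 1M tokens each way drops from $40 to $20.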
 

revyfan

Newbie
Jan 26, 2018
65
51
Whelp, it's official, OpenAI has done it again. Absolutely amazing model for translations for the price. I don't even understand how they did it.

COMPARISON TIME:

Welcome back to my dumb ramblings for the mentally ill (Me)!

Today I asked GPT-4 Turbo and GPT-4o to translate a paragraph using the same presets and context. As you could possibly imagine, GPT-4 Turbo did well!
1715758143317.png

Now, with it costing less money and being twice as fast, I was expecting 4o to be either the same or slightly lower quality, but!
1715758309268.png

Honestly, other than the very obvious contextual change between the two near the end, it's mostly the same. BUT the fact that 4o somehow got the context right (the boy being raped by the hatched insects), despite almost every other model I tested with this same paragraph getting it wrong (Opus, I believe, got it right, but it costs too much money to really care), fucking amazes me. We are absolutely living in the future, and I can't wait to see how other companies respond to such a good leap between quality and pricing!

Here's the original text for anyone who wants to give it a whirl.

Xラクガキ インセクト女王


遊戯王のラクガキ


世界でカードに力が宿り精霊カードが出現するようになった。友達は霊使いやBMGなど美少女モンスターが出現しイチャイチャして貰えて羨ましがってた少年。そんな彼の元に出て来たのはインセクト女王である…
そのデカさと悍ましさにビビってしまった少年…しかしインセクト女王はマスターである少年に発情し少年の命令も聞かずに無理矢理交尾。
童貞を奪われた少年に襲いかかる恐怖と快楽…そのまま蟲の膣内で果ててしまう…女王はマスターに愛されたと解釈して何度も何度も巨大なお腹をピストンし少年を犯し続けた。


その後、インセクト女王は卵を産み孵化した自分の子でもあるインセクトモンスターにも容赦なく犯される…身も心も犯され女王のツガイとなった少年はインセクト女王達と共に姿を消したと言う。
 

dporkster1024

New Member
Aug 21, 2023
2
1
Bash script replacement for GAMEUPDATE.bat for any Linux users. Works for me on Ubuntu. Save to GAMEUPDATE.sh, give execution rights with sudo chmod +x GAMEUPDATE.sh, and run with ./GAMEUPDATE.sh.

Bash:
#!/bin/bash

# Check if patch-config.txt exists
if [ ! -f ./patch-config.txt ]; then
    echo "Config file (patch-config.txt) not found! Assuming no patching needed."
    exit 0
fi

# Read configuration from file (stripping Windows carriage returns)
source ./patch-config.txt
USERNAME=$(echo "$username" | tr -d '\r')
REPO=$(echo "$repo" | tr -d '\r')
BRANCH=$(echo "$branch" | tr -d '\r')

# Get the latest hash. Note: this hashes the whole branch API response,
# not the commit SHA itself - any new commit changes the response, which
# is enough to detect an update.
echo "Getting latest commit SHA hash."
LATEST_PATCH_SHA=$(curl -s "https://api.github.com/repos/${USERNAME}/${REPO}/branches/${BRANCH}" | sha256sum | tr -d "[:space:]-")

# Compare with previous hash
if [ -f previous_patch_sha.txt ]; then
    PREVIOUS_PATCH_SHA=$(head -n 1 previous_patch_sha.txt | tr -d '\r')

    if [ "$LATEST_PATCH_SHA" = "$PREVIOUS_PATCH_SHA" ]; then
        echo "Patch is up to date."
        exit 0
    else
        echo "Update found! Patching..."
    fi
else
    echo "Previous SHA hash not found!"
    echo "Assuming first time patching..."
fi

# Download zip file
echo "Downloading latest patch..."
curl -s "https://codeload.github.com/${USERNAME}/${REPO}/zip/refs/heads/${BRANCH}" -o repo.zip

# Extract contents
echo "Extracting..."
rm -rf "${REPO}-${BRANCH}"
unzip -qq repo.zip

# Apply patch
echo "Applying patch..."
cp -r "${REPO}-${BRANCH}"/* ./

# Clean up
echo "Cleaning up..."
rm -f repo.zip
rm -rf "${REPO}-${BRANCH}"

# Store latest SHA for next check
echo -n "$LATEST_PATCH_SHA" > previous_patch_sha.txt
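For reference, the script sources `patch-config.txt` and expects it to define the three variables used above. A hypothetical example (the values are placeholders, not from the thread):

```shell
# patch-config.txt - sourced by GAMEUPDATE.sh
# Point these at the GitHub repo that hosts the patch.
username=ExampleUser
repo=example-translation-patch
branch=main
```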
 

wgoo21

New Member
Feb 29, 2020
4
5
Is it possible to use a huggingface model instead of openai's chatgpt? Since those can be free, locally hosted, and unrestricted
It's definitely possible. Such solutions already exist; for example, there's a module called Sakura for BallonsTranslator. I haven't used it myself, but if I understand correctly, the module allows for the use of the local model Sakura-13B-Galgame for translating from Japanese to Chinese.

It's obvious that a similar approach can be used with any other model, but the translation quality from Japanese to English with models that can run on relatively modern PCs is very poor. (I tested a bunch of different models about half a year ago, and as far as I know there haven't been any significant breakthroughs since; even Llama 3 doesn't yield particularly good results.)
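For anyone who wants to experiment anyway: many local runners (llama.cpp's server, Ollama, LM Studio) expose an OpenAI-compatible chat endpoint, so a local model can be queried with plain HTTP. A minimal sketch - the URL, port, and model name are placeholders you'd match to your own server:

```python
import json
import urllib.request

def build_payload(text, model="local-model"):
    """Build an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Translate the following Japanese text into English."},
            {"role": "user", "content": text},
        ],
    }

def local_translate(text, base_url="http://localhost:8080/v1"):
    """POST the request to the local server and return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_payload(text)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Whether the output is usable is another question, per the quality caveats above.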
 
  • Like
Reactions: Techno11244
Jul 27, 2019
63
111
Anyone here tried translating Waffle games using this? It's missed about 10% of the text I'm trying to work on, which is quite a lot in this case - probably a little over 1000 missed lines if we convert that percentage to lines. Also, yes, if you click on the link, it has a translator, but he has been radio silent for two years on everything, so I'm just assuming his translation of this is dead. On top of that, lots of people struggle to hook this with Textractor: even with the right hook code it doesn't work, and you need to change a few file names for it to hook right.
 

dazedanon

Engaged Member
Modder
Uploader
Donor
Jul 24, 2017
2,537
28,615
Anyone here tried translating Waffle games using this? It's missed about 10% of the text I'm trying to work on, which is quite a lot in this case - probably a little over 1000 missed lines if we convert that percentage to lines. Also, yes, if you click on the link, it has a translator, but he has been radio silent for two years on everything, so I'm just assuming his translation of this is dead. On top of that, lots of people struggle to hook this with Textractor: even with the right hook code it doesn't work, and you need to change a few file names for it to hook right.
Don't think I've used it on Waffle before. Is this using the CSV module?
 
Jul 27, 2019
63
111
Don't think I've used it on Waffle before. Is this using the CSV module?
No, it's using the anim module. I'm new to this, so I'm just kinda guessing that the .csv wouldn't work, since the unpacking tool I use unpacks the scr.pak into a JSON file.
Found tool in this thread: https://f95zone.to/threads/translation-request-maki-chan-to-now-waffle.99167/
Link to the actual tool:

I also tried loading it with the other .json modules, but none of them would even load except for anim, so I kinda just went with it.
 
Last edited:

dazedanon

Engaged Member
Modder
Uploader
Donor
Jul 24, 2017
2,537
28,615
No, it's using the anim module. I'm new to this, so I'm just kinda guessing that the .csv wouldn't work, since the unpacking tool I use unpacks the scr.pak into a JSON file.
Found tool in this thread: https://f95zone.to/threads/translation-request-maki-chan-to-now-waffle.99167/
Link to the actual tool:

I also tried loading it with the other .json modules, but none of them would even load except for anim, so I kinda just went with it.
Maybe there's something special about the lines that it's missing? That module is built for anim, after all, so it might not be a perfect fit.
 
Jul 27, 2019
63
111
Maybe there's something special about the lines that it's missing? That module is built for anim, after all, so it might not be a perfect fit.
Ah shit, right. Alright, I'll go take a look at an anim game then and compare. For now, though, I've tried tweaking my prompt and doing a second pass. I'll keep you posted.
 
Jul 27, 2019
63
111
Alright, so I looked through the JSON TL in https://f95zone.to/threads/wifes-pussy-transformed-while-im-away-final-anim-teammm.177100/ to compare anim and Waffle.

And the line structure seems to be the same, namely: "japanese text": "english text",

Though I've done four passes so far, and something is setting my alarm off just a little bit. If I search for empty ("") texts - i.e., lines where the translation didn't happen - they decrease at a very linear rate, like this:

First pass empty translations: 1865
Second pass empty translations: 1580
Third pass empty translations: 1291
Fourth pass: 1060

First to second pass, translations completed: 285
Second to third pass, translations completed: 289
Third to fourth pass, translations completed: 231

There's a dip at the third number, but the first two were right next to each other, so there's probably some setting/prompt adjustment to be made, since the issue is fairly consistent. If I ever decide to do another Waffle game, maybe I'll look into it. But for now I'll just brute-force it, I guess; since I'm only using the GPT-3.5 API the cost is low, and if I want to publish it I'll take the month or two to edit it.
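The empty-translation count above is easy to script against the `"japanese text": "english text"` structure described; a minimal sketch (the file path is whatever your module writes out):

```python
import json

def count_empty_translations(path):
    """Count entries whose English value is still an empty string,
    i.e. lines the translator never filled in."""
    with open(path, encoding="utf-8") as f:
        mapping = json.load(f)
    return sum(1 for value in mapping.values() if value == "")
```

Running it after each pass gives the per-pass numbers without hand-searching for `""`.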
 

dazedanon

Engaged Member
Modder
Uploader
Donor
Jul 24, 2017
2,537
28,615
Alright, so I looked through the JSON TL in https://f95zone.to/threads/wifes-pussy-transformed-while-im-away-final-anim-teammm.177100/ to compare anim and Waffle.

And the line structure seems to be the same, namely: "japanese text": "english text",

Though I've done four passes so far, and something is setting my alarm off just a little bit. If I search for empty ("") texts - i.e., lines where the translation didn't happen - they decrease at a very linear rate, like this:

First pass empty translations: 1865
Second pass empty translations: 1580
Third pass empty translations: 1291
Fourth pass: 1060

First to second pass, translations completed: 285
Second to third pass, translations completed: 289
Third to fourth pass, translations completed: 231

There's a dip at the third number, but the first two were right next to each other, so there's probably some setting/prompt adjustment to be made, since the issue is fairly consistent. If I ever decide to do another Waffle game, maybe I'll look into it. But for now I'll just brute-force it, I guess; since I'm only using the GPT-3.5 API the cost is low, and if I want to publish it I'll take the month or two to edit it.
What might be happening is that, since you are using 3.5, the AI isn't smart enough and more mismatches are happening.

Try setting the batch to a smaller number at the top of the file.
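The idea behind the batch setting, roughly: the tool sends a batch of lines per request and maps the reply back line-by-line, so a smaller batch gives the model fewer chances to drop or misalign a line. A toy sketch (the function and variable names are mine, not the ones in the file):

```python
def make_batches(lines, batch_size=10):
    """Split the lines to translate into fixed-size batches;
    the last batch holds whatever remains."""
    return [lines[i:i + batch_size] for i in range(0, len(lines), batch_size)]
```

Smaller batches mean more requests (and slightly more prompt overhead), but each reply is easier to match back to its inputs.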
 
Jul 27, 2019
63
111
What might be happening is that, since you are using 3.5, the AI isn't smart enough and more mismatches are happening.

Try setting the batch to a smaller number at the top of the file.
Yup, you were 100% right: decreasing the batch size doubled the number of fixes each pass. Now there are only 50 or so lines that I have to do by hand, because it just can't grab those for some reason. So thanks - you just decreased my workload from a few hours to a little under half an hour.
 
  • Like
Reactions: dazedanon