> Try Translator++, using Sugoi. Translator++ is made to translate RPG Maker games in a more efficient way.

I don't know which RPG Maker this is. It is neither RPG Maker VX Ace, RPG Maker MZ, nor RPG Maker MV.
> I don't know which RPG Maker this is. It is neither RPG Maker VX Ace, RPG Maker MZ, nor RPG Maker MV.

All the text is stored in \spt\ as simple txt files, just with a different extension (.spt). Copy those files somewhere, change .spt to .txt, and open them with Notepad++ or something else that can detect the encoding automatically. What you need is the text that comes after msgt (not all files have it). There may be some other spots you'll have to translate, but you can figure that out yourself. You can also translate the txt files with Sugoi (note that it can only deal with simple txt saved in UTF-8), then make sure the non-msgt Japanese text stayed the same, and change .txt back to .spt.
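The rename-and-extract step described above can be scripted. Here's a minimal sketch; it assumes the .spt files are plain UTF-8 text and that dialogue lines literally start with "msgt", as the post describes. Adjust for the real file format, and keep backups of the originals:

```python
import shutil
from pathlib import Path

def extract_msgt_lines(spt_dir: str, out_dir: str) -> list[str]:
    """Copy every .spt file into out_dir as a .txt file and collect the
    text that follows the 'msgt' marker. The 'msgt' prefix convention is
    taken from the post above; adjust if the real files differ."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    found = []
    for spt in sorted(Path(spt_dir).glob("*.spt")):
        txt = out / (spt.stem + ".txt")
        shutil.copyfile(spt, txt)  # originals stay untouched
        for line in txt.read_text(encoding="utf-8").splitlines():
            if line.startswith("msgt"):
                found.append(line[len("msgt"):].strip())
    return found
```

After translating, the reverse step is just renaming .txt back to .spt, leaving every non-msgt line byte-for-byte identical.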
> Sugoi_Toolkit_v8.0
> You must be registered to see the links
> For preservation.
> And also a modified menu for those who don't want to see the site every time (a file with the original window is included).

Can you reupload this on other sites like gofile or Mega? Pixeldrain is blocked in my country
> Can you reupload this on other sites like gofile or Mega? Pixeldrain is blocked in my country

tysm
> it's happening, a new version is coming out on the 19th.
> You must be registered to see the links

Here's hoping it's an actual updated model and not just the bloatware that barely works (for me, anyways).
> Here's hoping it's an actual updated model and not just the bloatware that barely works (for me, anyways).

That is unlikely to happen, ever. The developer already stated that there is not really a way to improve the current model further. It has run its course in terms of technological development, and at best only incremental updates would be possible now, which makes it not really worth the GPU time to retrain the model. Hence, there is no reason for the developer to bother improving it anymore.
> That is unlikely to happen, ever. The developer already stated that there is not really a way to improve the current model further. It has run its course in terms of technological development, and at best only incremental updates would be possible now, which makes it not really worth the GPU time to retrain the model. Hence, there is no reason for the developer to bother improving it anymore.

Mixtral is pretty 'old' at this point. Try this one
If you want better translations than what Sugoi can produce now, that requires either adding dictionaries to hardcode specific translations, as SLR Translator does, which fixes a lot of Sugoi's quirks but, like all NMTs, still messes up the subjects, or moving to the superior technology of LLMs instead. Large Language Models (LLMs) are the successor to the NMT technology used in the Sugoi Toolkit, so if you want any practical improvement over Sugoi, look into AI translations instead.
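The dictionary approach mentioned above boils down to a hardcoded find-and-replace pass applied around the machine-translation step. A minimal sketch of the idea; this is illustrative only, not SLR Translator's actual implementation:

```python
def apply_glossary(text: str, glossary: dict[str, str]) -> str:
    """Replace known source-language terms (names, items, etc.) with fixed
    translations so the MT engine can't mangle them. Longer keys are
    applied first so overlapping entries don't clobber each other."""
    for src in sorted(glossary, key=len, reverse=True):
        text = text.replace(src, glossary[src])
    return text

# Hypothetical glossary for illustration:
glossary = {"リリィ": "Lily", "魔王城": "Demon King's Castle"}
```

Running this before (or after) translation pins the terms in place, but it cannot fix dropped or wrong subjects; that limitation is inherent to the NMT model itself.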
I did a comparison between Sugoi, DeepL, and Mixtral 8x7b. The results were that Sugoi is better than LLMs without context, but with context, LLMs are better, at the cost of significantly increased computation time and reduced automation. For its minimal computation time, the Sugoi NMT model, shipped as Sugoi Offline Translator v4 and included since Sugoi Toolkit v6, is realistically the best quality possible for any JPN->ENG NMT model.
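"Context" here just means handing the LLM the preceding lines along with the one being translated. A minimal sketch of how such a prompt could be assembled; the prompt wording is my own, not taken from any of the tools mentioned:

```python
def build_translation_prompt(line: str, context: list[str], max_context: int = 5) -> str:
    """Build a JA->EN translation prompt that includes up to max_context
    preceding lines, so the LLM can resolve subjects and pronouns that
    a context-free NMT model has to guess."""
    recent = context[-max_context:]
    ctx = "\n".join(f"- {c}" for c in recent) if recent else "(none)"
    return (
        "Translate the following Japanese line into English.\n"
        f"Previous lines for context:\n{ctx}\n"
        f"Line: {line}\n"
        "English:"
    )
```

The extra context is exactly where the added computation time goes: every request re-sends the rolling window of prior lines, and each line needs a full LLM generation instead of a single fast NMT pass.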
> That is unlikely to happen, ever. The developer already stated that there is not really a way to improve the current model further.

Figured, I just wanted to believe... I guess the only other thing I hope is that he improves his OCR (also probably unlikely).
> Mixtral is pretty 'old' at this point.
rank | model | Accuracy |
1 | openai/gpt-4o-2024-05-13 | 0.747988 |
2 | anthropic/claude-3.5-sonnet | 0.747447 |
4 | nvidia/nemotron-4-340b-instruct | 0.719268 |
5 | lmg-anon/vntl-gemma2-27b_q5_k_m | 0.703626 |
6 | qwen/qwen-2-72b-instruct | 0.696493 |
7 | openai/gpt-3.5-turbo-1106 | 0.694348 |
8 | lmg-anon/vntl-llama3-8b-q8_0 | 0.68871 |
9 | google/gemma-2-27b-it_Q5_K_M | 0.68277 |
11 | mistralai/mixtral-8x22b-instruct | 0.678332 |
12 | cohere/command-r-plus | 0.674124 |
18 | meta-llama/llama-3-70b-instruct_Q4_K_M | 0.658825 |
25 | meta-llama/llama-3-70b-instruct | 0.63304 |
28 | mistralai/mixtral-8x7b-instruct | 0.616399 |
31 | meta-llama/llama-3-8b-instruct_Q8_0 | 0.604868 |
32 | cohere/command-r | 0.601418 |
- | Sugoi Translator | 0.6093 |
- | Google Translate | 0.5395 |
- | Naver Papago | 0.4560 |
- | Alibaba Translate | 0.4089 |
> Figured, I just wanted to believe... I guess the only other thing I hope is that he improves his OCR (also probably unlikely).

The developer has a
> The above results are not entirely believable. There is no way the quantized version of llama3-70b-instruct should perform better than the cloud version, which makes me question the validity of the test.

Quantization is a bit weird in the sense that it introduces more noise, so to speak. It may have just tipped the scale in one particular run. Or it's just some quirk of llama.cpp, since the tokenizer may not be exactly one-to-one.
> In addition, the dataset used to train the models, and the test itself, also included a lot of kanji names. There is no way to correctly translate those without the person saying how their name should be read in the text. Since the vntl dataset includes a lot of those hardcoded mappings, if the test checks for them and counts them in the ranking, then the results are basically cheating, boosting the vntl models higher than they truthfully belong.

I didn't really check the methodology behind the testing, but isn't the evaluation set different from the training one?
> Still, it is an interesting leaderboard. If the results are taken at face value, vntl-gemma2-27b should be better than the llama3-8b version. And as I said earlier, and as my results showed, the difference between Sugoi and LLMs, especially without context, is not very large. Sugoi holds up well given its limitations.

I heard that the author behind the fine-tunes doesn't really recommend Gemma over Llama 8B. I guess it makes sense, because it's totally feasible to fit Llama entirely in your GPU for blazing-fast translation, while Gemma is quite big and isn't that much better. (Plus, Google fudged something up with the Gemma 2 release; no one really knows what's up with it.)
> I was thinking of whipping up some sort of GUI that would work with llama.cpp's server.

Shouldn't
You must be registered to see the links
already do this? I haven't tried anything yet, only read, so I don't know.
> Shouldn't You must be registered to see the links already do this? I haven't tried anything yet, only read, so I don't know.

Kobold is a chat frontend/launcher for llama.cpp. I meant something like Sugoi's interface, where you copy input into your clipboard and it sends it for translation.
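A clipboard-to-llama.cpp bridge like that could look roughly like the sketch below. This is not Kobold's or Sugoi's actual code; it assumes a llama.cpp server running locally on its default port with the standard /completion endpoint, and the prompt wording is made up:

```python
import json
import time
import urllib.request

SERVER = "http://127.0.0.1:8080/completion"  # default llama.cpp server address

def make_request_body(text: str) -> bytes:
    """JSON payload for llama.cpp's /completion endpoint. The prompt
    phrasing here is illustrative, not something either tool prescribes."""
    prompt = f"Translate this Japanese text into English:\n{text}\nEnglish:"
    return json.dumps({"prompt": prompt, "n_predict": 256}).encode("utf-8")

def translate(text: str) -> str:
    """Send one line to the local server and return the generated text."""
    req = urllib.request.Request(
        SERVER,
        data=make_request_body(text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["content"]

if __name__ == "__main__":
    # Poll the clipboard and translate whatever new text appears,
    # mimicking Sugoi's copy-to-translate workflow.
    import tkinter
    root = tkinter.Tk()
    root.withdraw()
    last = ""
    while True:
        try:
            current = root.clipboard_get()
        except tkinter.TclError:  # clipboard empty or non-text
            current = last
        if current != last:
            last = current
            print(translate(current))
        time.sleep(0.5)
```

Polling with tkinter keeps it dependency-free; a real tool would likely use a clipboard library and debounce repeated copies.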
> Can we use Sugoi to translate offline text files? Can you please drop your thoughts on my TL question thread

Here are the release notes:
You must be registered to see the links
For the offline model:
"Sugoi Offline Model is now using CT2 package by default, replacing previous fairseq library. Accuracy is about the same while CPU processing speed is twice as fast (even more so when enabling GPU)."
> Can we use Sugoi to translate offline text files?

It can translate txt files, but it's better to copy the lines into a spreadsheet (Excel and the like) and then use Sugoi with Translator++. With enough effort and ingenuity you can translate even games that T++ can't parse on its own.
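The spreadsheet round-trip can also be scripted. A small sketch, assuming one source line per row; the column names are my own, not Translator++'s format:

```python
import csv

def lines_to_csv(lines: list[str], csv_path: str) -> None:
    """Write one source line per row, leaving an empty 'translation'
    column to be filled in by Sugoi/Translator++ or by hand."""
    with open(csv_path, "w", newline="", encoding="utf-8") as f:
        w = csv.writer(f)
        w.writerow(["source", "translation"])
        for line in lines:
            w.writerow([line, ""])

def csv_to_pairs(csv_path: str) -> dict[str, str]:
    """Read the filled-in sheet back as a source -> translation mapping,
    ready to substitute into the original files."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        return {row["source"]: row["translation"] for row in csv.DictReader(f)}
```

A CSV opens directly in Excel or LibreOffice, so the manual review step stays exactly as described above.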
> it's out
> You must be registered to see the links

So it should be about the same speed as FuwaNovel's repackage of the tool into CTranslate2?
> So it should be about the same speed as FuwaNovel's repackage of the tool into CTranslate2?

Yes, minus the cache, of course.