Tool Sugoi - a translation tool with an offline AI-powered model for translating from Japanese; DeepL competitor

Zippix

Well-Known Member
Sep 7, 2017
1,631
1,093
Lol, the amount of headache I've been having lately trying to somehow get CT2 working with a non-CUDA-capable GPU...
At any rate, it's not that straightforward. The most easily accessible script for the v6.0 toolkit, which supposedly gives you both CUDA and CT2 (and separately), actually slows down the TL for me (when using only the CT2/cpu part, compared to the vanilla v6.0).

There actually is a working UI mod for v4.0 of the toolkit with actually faster CT2 (as it should be), but good luck trying to make it work with the most recent v6.0 (though I'd reckon that someone tech-savvy will be able to do it - or at least pull the script file/files from it and use them in 6.0 too, with some changes).

Also, yeah, that "Aguuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu" shit the translator pulls from time to time is annoying, and it's not just Sugoi-specific (Google just flat out cuts a big chunk of the TL; DeepL does something similar - it can give you "Aguuuuuuuuuuuuuuuuuuu" too, or even some standard bullshit sentence with absolutely zero connection to the text you are trying to TL). Though it's quite rare in my experience, thankfully. Will try out the suggested edit to the python script nonetheless; thanks. If it slows things down too much in general, it may not be worth it...

EDIT:
So there is a parameter that fixes the repeats, but it makes the translator 2x slower, so you really should combine it with either the CT2 patch or, better, the Nvidia CUDA patch.

As mentioned, CT2 speeds up translation ~4-5x and CUDA speeds it up 10x.

NOTE: If I understand right, the CT2/CUDA mod already has this built in, as long as you launch from the Sugoi-Translator-Offline-CT2 (click here).bat script.


Edit flaskserver.py in .\Code\backendServer\Program-Backend\Sugoi-Japanese-Translator\offlineTranslation\fairseq


View attachment 2746785
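(The attached screenshot isn't visible here; presumably the edit is along these lines - a rough sketch only, assuming the stock flaskserver.py drives a fairseq hub-style translate() call, with made-up variable names:)
Python:
# Hypothetical sketch of the flaskserver.py edit; "ja2en" and "text" stand in
# for whatever the real script calls its fairseq model and input string.
# Before:  result = ja2en.translate(text, beam=5)
# After (no_repeat_ngram_size stops the "uuuu..." looping):
result = ja2en.translate(text, beam=5, no_repeat_ngram_size=3)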
Haven't tried it yet, but the issue I'm seeing immediately is that when you use e.g. the CT2 only option (22), it's not flaskserver.py it's calling...
 

citron

Newbie
Dec 27, 2016
41
59
Haven't tried it yet, but the issue I'm seeing immediately is that when you use e.g. the CT2 only option (22), it's not flaskserver.py it's calling...
If you are using CT2 on Toolkit V2.5, it calls the file located at
".\Code\backendServer\Program-Backend\Sugoi-Japanese-Translator\offlineTranslation\ct2\flaskServerCt2.py"

and nerdman83 is correct, CT2 already uses those parameters
Python:
result = translator.translate_batch(
    source=text,
    beam_size=beam_size,
    num_hypotheses=num_hypotheses,
    no_repeat_ngram_size=3)
If you want to remove the "<unk>" output that Sugoi sometimes creates, you can modify the line
"finalResult = finalResult.replace"
 

nerdman83

Newbie
Mar 12, 2019
73
79
Lol, the amount of headache I've been having lately trying to somehow get CT2 working with a non-CUDA-capable GPU...
At any rate, it's not that straightforward. The most easily accessible script for the v6.0 toolkit, which supposedly gives you both CUDA and CT2 (and separately), actually slows down the TL for me (when using only the CT2/cpu part, compared to the vanilla v6.0).
The CudaInstallForToolKit6 I posted is pretty straightforward to install. I had the least amount of issues with it. If anything it made me realize my v4 installation was botched.

Just make sure you do all the install steps, including converting the model to CT2.


Also there's one final step that's easy to miss: when you use the new Sugoi-Translator-Offline-CT2 (click here).bat command, you need to run the 4 options at the bottom the first time to activate CUDA / CT2 / CT2 fast.


CT2 fast can sometimes be faster than CUDA actually (you avoid the PCI Express roundtrip), it's slightly "less accurate" though (dunno what that means). CUDA is just better if you're doing whole game translation since you can massively thread it; I was running translations on 20 files at a time using Python's ThreadPool.imap.
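For reference, that threaded whole-game setup looks roughly like this - a sketch, assuming the offline server listens on its usual default port 14366 and takes a {"content": ..., "message": "translate sentences"} JSON payload (adjust the URL, payload and file list to whatever your flaskServer and game dump actually use):
Python:
# Rough sketch of threading translations against the local Sugoi server.
# The endpoint, payload format, and file names below are assumptions.
import requests
from multiprocessing.pool import ThreadPool

SUGOI_URL = "http://localhost:14366/"  # assumed default port

def translate_file(path):
    with open(path, encoding="utf-8") as f:
        lines = [line.strip() for line in f if line.strip()]
    translated = []
    for line in lines:
        r = requests.post(SUGOI_URL, json={
            "content": line,
            "message": "translate sentences",
        })
        translated.append(r.json())
    return path, translated

files = [f"script_{i:03}.txt" for i in range(20)]  # hypothetical dump files
with ThreadPool(20) as pool:                       # 20 files at a time
    for path, lines in pool.imap(translate_file, files):
        print(path, len(lines), "lines translated")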
 

Zippix

Well-Known Member
Sep 7, 2017
1,631
1,093
If you are using CT2 on Toolkit V2.5, it calls the file located at
".\Code\backendServer\Program-Backend\Sugoi-Japanese-Translator\offlineTranslation\ct2\flaskServerCt2.py"

and nerdman83 is correct, CT2 already uses those parameters
Python:
result = translator.translate_batch(
    source=text,
    beam_size=beam_size,
    num_hypotheses=num_hypotheses,
    no_repeat_ngram_size=3)
If you want to remove the "<unk>" output that Sugoi sometimes creates, you can modify the line
"finalResult = finalResult.replace"
Which script is that?
Just noticed that that install script from the Discord server for v6.0 was actually shared here, lol. I know it should be easy, and it is: just install ct2 (option 5), then switch to CPU mode (22), and yeah, should be set to start offline with ct2 (option 1). The thing is, even though they're supposedly the very same, it's flaskServerCt2_cpu.py that gets used, and I see no such parameters in its code. EDIT: found it at line 64, my bad.

Still, now I'm wondering if that is the cause for it to be slower than vanilla 6.0, at least in my case.
it's slightly "less accurate" though (dunno what that means)
Nah, 'cpufast' is quite a bit "less accurate" - it rarely gives the same output (and when it differs, it's for the worse), according to my few test runs; it is much faster though, indeed.
 
Last edited:

shiny-kuki

Member
May 6, 2020
284
212
If you want to remove the "<unk>" output that Sugoi sometimes creates, you can modify the line
"finalResult = finalResult.replace"
I have a disable_unk variable in mine, it's commented out for some reason

1688665063826.png

also if CT2 calls a different file, then the "test" I did yesterday was a bunch of nothing
 

Zippix

Well-Known Member
Sep 7, 2017
1,631
1,093
I have a disable_unk variable in mine, it's commented out for some reason

View attachment 2749457

also if CT2 calls a different file, then the "test" I did yesterday was a bunch of nothing
The second command prompt window on the taskbar shows which python script the offline sugoi is using, see below (for my v6.0 with CT2, cpu mode).
1688665748077.png
(EDIT: which is "\Code\backendServer\Program-Backend\Sugoi-Japanese-Translator\offlineTranslation\ct2\flaskServerCt2_cpu.py", to be precise)
 
Last edited:

nerdman83

Newbie
Mar 12, 2019
73
79
Can someone explain how to install CudaInstallForToolKit6?
You need Windows PowerShell (search online how to activate it).
Also Python 3.x installed (Python website or Microsoft Store).
Then extract it with "Extract Here" (it doesn't matter where you extract it, it will ask where your Sugoi directory is during install, but I put it inside my Sugoi folder anyway). It'll extract to a CudaInstall directory.
Go into the CudaInstall directory, right-click install-cuda.ps1, and click "Run with PowerShell".


Then you have to install all the various parts you need (you don't have to install CUDA if you don't have an NVIDIA card).

When you do the steps, a popup directory selector should show up asking where to install the tool; you have to point it to the base directory of Sugoi (where Sugoi-Toolkit (click here).bat is).
 
Last edited:

nerdman83

Newbie
Mar 12, 2019
73
79
just install ct2 (option 5), then switch to CPU mode (22)
Did you do Option 8 like it says? You have to convert the model to a CT2 model.

You should have a .\Code\backendServer\Program-Backend\Sugoi-Japanese-Translator\offlineTranslation\ct2\ct2_models\model.bin

It should be Option 5, then Option 8, then close the installer and run Sugoi-Translator-Offline-CT2 (click here).bat. Then option 22 to change to cpu mode.
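(Option 8 is just the fairseq-to-CT2 model conversion. If it ever fails, you can do roughly the same thing by hand with ctranslate2's converter - a sketch with assumed paths, not the installer's exact command, and assuming I have the converter API right:)
Python:
# Sketch: manually convert Sugoi's fairseq model to a CT2 model.
# All paths below are assumptions - point them at wherever your
# fairseq model and data actually live.
from ctranslate2.converters import FairseqConverter

converter = FairseqConverter(
    model_path=r".\fairseq\japaneseModel\big.pretrain.pt",  # assumed path
    data_dir=r".\fairseq\japaneseModel",                    # assumed path
)
converter.convert(output_dir=r".\ct2\ct2_models", force=True)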

Still, now I'm wondering if that is the cause for it to be slower than vanilla 6.0, at least in my case.
My observation / understanding on CT2 vs vanilla.

CT2 doesn't necessarily speed up translation times by much (about 15% in my test), but it lowers CPU usage and it also lowers the variability in translation times.

I notice with vanilla, translation times are all over the place. With CT2 they're a bit more consistent (the delay corresponds directly to the length of the sentence).


Here's my translation time test (partially done for my sanity, to confirm whether CT2 or CUDA is better); you can right-click and copy these to the clipboard with Sugoi open and it'll translate anything in the clipboard. Remember, the first translation is always slower as it loads and initializes the model.

From Lusterise Kouyoku Senki ExS-Tia 1
Code:
Vanilla:
translation time: 2748.076ms
 葛城真理奈――創真の家の隣に住んでいる少女である。昔から家族ぐるみの付き合いをしていて、まるで兄妹の様にずっと一緒に過ごしてきた幼馴染みだ。
Marina Katsuragi—the girl who lives next door to Souma's house. She's a childhood friend who's known her whole family for a long time, and who's lived together like siblings for a long time.

Vanilla w/ ngram:
translation time: 2810.604ms
 葛城真理奈――創真の家の隣に住んでいる少女である。昔から家族ぐるみの付き合いをしていて、まるで兄妹の様にずっと一緒に過ごしてきた幼馴染みだ。
Marina Katsuragi—the girl who lives next door to Souma's house. She's a childhood friend who's known her whole family for a long time, and has spent a lot of time together like siblings.

CT2 w/ ngram:
Request ( 1 ):  ['\u3000葛城真理奈――創真の家の隣に住んでいる少女である。昔から家族ぐるみの付き合いをしていて、まるで兄妹の様にずっと一緒に過ごしてきた幼馴染みだ。']
Translation ( 2.27 s):  Marina Katsuragi—the girl who lives next door to Souma's house. She's a childhood friend who's known her whole family for a long time, and has spent a lot of time together like siblings.
127.0.0.1 - - [11/Jul/2023 11:15:06] "POST / HTTP/1.1" 200 -

CUDA w/ ngram:
Request ( 1 ):  ['\u3000葛城真理奈――創真の家の隣に住んでいる少女である。昔から家族ぐるみの付き合いをしていて、まるで兄妹の様にずっと一緒に過ごしてきた幼馴染みだ。']
Translation ( 0.47 s):  Marina Katsuragi—the girl who lives next door to Souma's house. She's a childhood friend who's known her whole family for a long time, and has spent a lot of time together like siblings.
127.0.0.1 - - [11/Jul/2023 11:18:29] "POST / HTTP/1.1" 200 -

Deepl (for comparison):
 葛城真理奈――創真の家の隣に住んでいる少女である。昔から家族ぐるみの付き合いをしていて、まるで兄妹の様にずっと一緒に過ごしてきた幼馴染みだ。
Marina Katsuragi - she is the girl who lives next door to Souma's house. They have been family friends for a long time, and have been spending all their time together as if they were brother and sister.

I just noticed that the translation is a tiny bit different with the ngram fix in some cases.

It likes to make things shorter / less wordy. I can't say the new translation is worse, it's just more succinct.

Given the severity of the repeating character bug that it fixes -- it's worth it.


The one thing I'll agree on is CT2 cpufast is very bad (cpufast, not cpu). Yes, it's 2x faster and 90% of the translations are understandable, but 10% are extremely bad.

Definitely use the regular CT2 cpu version.


If the speed bothers you, GFX cards are getting cheaper and an RTX 3060 is like $200 now. EVGA has had some B-Stock 2xxx cards for $100.


A few extra ngram comparisons:

Code:
Vanilla:
translation time: 2304.181ms
 創真――紫峰創真は幸せそうな寝息を響かせる。溺れていたい。もっとこの心地良さに包まれながら……。とでも訴えるみたいに。
Souma――Shiho Souma's breathing echoes as he sleeps happily. He wants to drown. He wants to be enveloped in more of this comfort......as if to plead.

translation time: 952.431ms
「もうっ! 起きて! 起きなさい創ちゃんっ!!」
「Geez! Wake up! Wake up, Sou-chan!!」

translation time: 751.448ms
「あっつ……うつつつつぅ……」
「Ow... Ow, ow, ow...」


Vanilla w/ Ngram fix:
translation time: 2392.798ms
 創真――紫峰創真は幸せそうな寝息を響かせる。溺れていたい。もっとこの心地良さに包まれながら……。とでも訴えるみたいに。
Souma――Shiho Souma's breathing echoes as he sleeps happily. He wants to drown. Engulfed in more of this comfort......as if to plead.

translation time: 996.987ms
「もうっ! 起きて! 起きなさい創ちゃんっ!!」
「Geez! Wake up! Get up, Sou-chan!!」

translation time: 698.304ms
「あっつ……うつつつつぅ……」
「Ow... Ow ow ow...」


Deepl (Deepl sucks on these):
創真――紫峰創真は幸せそうな寝息を響かせる。溺れていたい。もっとこの心地良さに包まれながら……。とでも訴えるみたいに。
Souma - Souma Shiomine sounds like a happy sleeper.I want to drown. I want to be wrapped up in this comfort more ....... As if to say, "I want to drown myself in this comfort.

「もうっ! 起きて! 起きなさい創ちゃんっ!!」
Oh, God! Get up! Wake up, Hajime!

「あっつ……うつつつつぅ……」
Atsu...... Utsu......
 
Last edited:
  • Like
Reactions: Zippix

nerdman83

Newbie
Mar 12, 2019
73
79
There actually is a working UI mod for v4.0 of the toolkit with actually faster CT2 (as it should be)
Okay, yeah I do see now the v6 Sugoi CT2 is broken.

I went back to my old v4 w/ Offline4.0 + the older sugoi-toolkit-mod-for-4.0-fix.zip mod (posted here) with working CT2. I modded the flaskserver to spit out the translation timing.

Code:
Request ( 71 ):   葛城真理奈――創真の家の隣に住んでいる少女である。昔から家族ぐるみの付き合いをしていて、まるで兄妹の様 にずっと一緒に過ごしてきた幼馴染みだ。
Translation ( 1.43 s):  Marina Katsuragi—the girl who lives next door to Souma's house. She's a childhood friend who's known her whole family for a long time, and has spent a lot of time together like siblings.
127.0.0.1 - - [11/Jul/2023 14:17:04] "POST / HTTP/1.1" 200 -

Request ( 60 ):   創真――紫峰創真は幸せそうな寝息を響かせる。溺れていたい。もっとこの心地良さに包まれながら……。とでも訴 えるみたいに。
Translation ( 1.22 s):  Souma――Shiho Souma's breathing echoes as he sleeps happily. He wants to drown. Engulfed in this comfort more......as if to plead.
127.0.0.1 - - [11/Jul/2023 14:17:14] "POST / HTTP/1.1" 200 -
This makes me feel a bit more sane, as I said a few months ago that CT2 is 2x faster than vanilla. This falls in line. CUDA is still 5x faster than vanilla.


That sugoi-toolkit-mod-for-4.0-fix.zip mod is a tiny bit trickier to install though, as there is one final step in the UI you have to click to convert the model, and sometimes it fails.
 
Last edited:
  • Like
Reactions: Zippix

nerdman83

Newbie
Mar 12, 2019
73
79
Okay, one final update: I can confirm the sugoi-toolkit-mod-for-4.0-fix.zip can be installed on top of v6 anniversary (you just lose the ability to launch the new ASMR stuff -- you can still launch it from the bat files directly).

Sugoi V6 Anniversary w/ Offline 4.0


sugoi-toolkit-mod-for-4.0-fix.zip

(backup)

Instructions for installing the v4 mod on top of v6 anniversary w/ CT2
1. Unpack Sugoi V6 Anniversary
2. Unpack the sugoi-toolkit-mod-for-4.0-fix directly into the Sugoi V6 base directory (you should have a "setup sugoi-mod.bat" next to "Sugoi-Toolkit (click here).bat" along with a "mod" directory if you did it right).
3. Double-Click setup sugoi-mod.bat and do option 1 install. It should copy some files and then let you exit.
4. Double-Click Sugoi-Toolkit (click here).bat which should launch a new UI.
5. Look in the TOP RIGHT; there's a hamburger menu. Click it, then click Install ctranslate2. Wait for it to be done.
6. EXIT THE APP.
7. Open Sugoi-Toolkit (click here).bat again.
8. Now click the hamburger menu again and click "Convert model". Wait for it to be done.
9. EXIT THE APP.
10. Open Sugoi-Toolkit (click here).bat.
11. Now you can check the box for Use ctranslate2. Select your Textractor option if you want it.


If you want to confirm the timings, here's the flask server with the timing commands added (this is for sugoi-toolkit-mod-for-4.0, the CUDA-6.0 mod already has it):


It goes in:
Sugoi-Translator-Toolkit-V6.0-Anniversary\Code\backendServer\Program-Backend\Sugoi-Japanese-Translator\offlineTranslation\fairseq
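(The attached file isn't visible here, but the timing addition is basically just wrapping the translate call - a sketch with assumed names, not the exact contents of the modded file:)
Python:
# Sketch of the timing instrumentation; the real modded flaskServer wraps
# its own translate call the same way.
import time

request_count = 0

def timed(translate_fn, text):
    """Wrap any translate call with the Request / Translation timing prints."""
    global request_count
    request_count += 1
    print("Request (", request_count, "): ", text)
    start = time.perf_counter()
    result = translate_fn(text)
    elapsed = time.perf_counter() - start
    print("Translation (", round(elapsed, 2), "s): ", result)
    return result

# Inside the flask route it would be called something like
# (ja2en is a hypothetical name for the loaded model):
#   result = timed(lambda t: ja2en.translate(t, beam=5), text)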


NOTE: If you are using CUDA, then the CudaInstallForToolKit6.zip mod is faster (0.46s vs 0.7s):

(backup)


NOTE2: REMEMBER Textractor has a delay setting too.

It defaults to 500 milliseconds, but I turn it down to 100 milliseconds in the settings. The delay is for games that draw text character by character, where Textractor grabs directly from the screen draw buffer and needs to wait for all of the characters to be drawn. But most hooks grab deeper, from strcpy() and such, and get the whole thing instantly.
 
Last edited:

Zippix

Well-Known Member
Sep 7, 2017
1,631
1,093
Wow, thanks a bunch for looking into things!
And lol again, since I hunted that UI toolkit mod for v4.0 on discord (was not easy to find) and AGAIN I just find out you have already shared it here...

I also wanted to make the timings work with the modified v4.0 CT2, but that script always ended up auto-closing after my edits, so I gave up on it eventually.

Question: could the CT2 flask script that CudaInstallForToolKit6 installs ('\ct2\flaskServerCt2_cpu.py') perhaps be modified to essentially become 'flaskServerCt2Mod.py', with some changes (I know, for example, that the model location needs changing as it puts it elsewhere, plus adding the timing display and yadayada), after one has installed CudaInstallForToolKit6 onto a v6.0?

EDIT: could you perchance make the v4 UI mod's 'flaskServerCt2.py' display timings too; because that would be great for comparison as well; for me at least.
EDIT2: wait... Could it actually be using "\mod\backendServer\Program-Backend\Sugoi-Japanese-Translator\offlineTranslation\fairseq\flaskServerCt2Mod.py" after all? :unsure:
It DOES say 'flaskServerCt2Mod' in the command prompt, hmm. Anyhoo, this one doesn't have timings, would be great if you could share yours with the changes that let it display those, too. (for a working, v4.0 ui mod CT2 comparison)
EDIT3: well, eff me, it's actually using '\Code\backendServer\Program-Backend\Sugoi-Japanese-Translator\offlineTranslation\fairseq\flaskServerCt2Mod.py'. At least finally located the bastard, lol. EDIT5: Fuck, you even mentioned this in your post, now I see... Reading comprehension ftw! :ROFLMAO:
EDIT4: welp, now I see it's for the v4.0 CT2 that you shared on mega after all, so I went and wrote the differences into mine too, and sure enough TIMINGS yaaay; will get a comparison soon(ish).

Timing comparisons:
So yeah, something's defo not working with CT2 for v6.0. Also, ngram - for me, and with CT2 at least - doesn't slow it down even a bit on these sentences; though yeah, other circumstances (sentences) may make it show its lag. Or not.
 
Last edited:

nerdman83

Newbie
Mar 12, 2019
73
79
Wow, thanks a bunch for looking into things!
And lol again, since I hunted that UI toolkit mod for v4.0 on discord (was not easy to find) and AGAIN I just find out you have already shared it here...

I also wanted to make the timings work with the modified v4.0 CT2, but that script always ended up auto-closing after my edits, so I gave up on it eventually.

Question: could the CT2 flask script that CudaInstallForToolKit6 installs ('\ct2\flaskServerCt2_cpu.py') perhaps be modified to essentially become 'flaskServerCt2Mod.py', with some changes (I know, for example, that the model location needs changing as it puts it elsewhere, plus adding the timing display and yadayada), after one has installed CudaInstallForToolKit6 onto a v6.0?
The timing measures the translation call itself, so it needs to be done in the flaskServer file.

Yeah, the flaskServers could be combined into one master flaskServer file, but that requires a lot of debugging. I just modify all of them.


Honestly I just keep two Sugoi directories, one with the 4.0 mod and one with the 6.0 mod.
 
Last edited:

nerdman83

Newbie
Mar 12, 2019
73
79
So yeah, something's defo not working with CT2 for v6.0. Also, ngram - for me, and with CT2 at least - doesn't slow it down even a bit on these sentences; though yeah, other circumstances (sentences) may make it show its lag. Or not.
You must have a crappier CPU, I have a Ryzen 5600x

If you have a modern GPU (even AMD) there's one last thing you could try. Someone made a mod that uses Microsoft DirectML API for Sugoi:



On my machine it was slightly slower than CT2 (1.5s vs 1.2s). It's also jankier to use, as the script only starts the server so you have to go into the \Code\ directory and start Textractor yourself.

For most people the mod-4.0 Sugoi is recommended but I suppose someone might have an older CPU with a higher end GPU that might benefit.
 
  • Like
Reactions: Zippix

Zippix

Well-Known Member
Sep 7, 2017
1,631
1,093
You must have a crappier CPU, I have a Ryzen 5600x

If you have a modern GPU (even AMD) there's one last thing you could try. Someone made a mod that uses Microsoft DirectML API for Sugoi:



On my machine it was slightly slower than CT2 (1.5s vs 1.2s). It's also jankier to use, as the script only starts the server so you have to go into the \Code\ directory and start Textractor yourself.

For most people the mod-4.0 Sugoi is recommended but I suppose someone might have an older CPU with a higher end GPU that might benefit.
Yeah, I have quite literally by accident stumbled upon it in the (discord) thread too (I don't think it's even pinned anywhere). Was also about to mention it as an alternative here.
Very close to each other on my end ("working" CT2 and DML), guess it's just preference (whether I want my GPU involved or not). Some barely noticeable speed bump for DML, I guess.
Also, right on the money regarding the CPU: a meager Ryzen 2600, but it serves me just fine for now. That would explain your regularly 15-20% faster TLs on vanilla. Still, the CT2 for v6.0 is effed, for sure.
Maybe your more recent gen Ryzen can make use of it better (and/or the script is optimized for larger caches or whatevs), hence the modest gains for you and the slowdown for me (which is still weird) on the v6.0 CT2; the CT2 script from the v4.0 UI mod works great for both of us though, giving me a ~40% boost and you a ~45-50% one (there's that 2x you were talking about).

Wish they could all be on one install of Sugoi V6.0, but the old CT2, the new (not correctly working) CT2, and DML screw each other over when you want them in one dir (not surprisingly; they even kill the vanilla too, lol) and none will work. With a big bunch of debugging it could probably be made to work - but that's above my pay grade. An all-in-one would rule!
So many GBs wasted. xD Also, the v6.0 toolkit CT2 with its 4GB of torch libraries installed doesn't help either.
On my machine it was slightly slower than CT2 (1.5s vs 1.2s). It's also jankier to use, as the script only starts the server so you have to go into the \Code\ directory and start Textractor yourself.
Not the case on my end. It works just fine and without any hiccup when starting "Sugoi-Translator-Offline-DML.bat" from the main dir. (Could be because they don't like to cohabit with one another, as I mentioned earlier; have you tried it on an untouched V6.0?)
 
Last edited:

Porkman

Member
Dec 30, 2017
149
78
Has anyone got the problem of DeepL translating Japanese into Dutch? It happens with some of the games I'm playing and I'm wondering what I can do about it.
 

nerdman83

Newbie
Mar 12, 2019
73
79
Has anyone got the problem of DeepL translating Japanese into Dutch? It happens with some of the games I'm playing and I'm wondering what I can do about it.
No clue. The Sugoi maker has a Discord.

I do know DeepL will sometimes barf on stuff it can't translate. It's normal.
 

yuuy22

Newbie
Oct 9, 2018
94
29
Is there a way to set this up to translate a manual translation JSON file made from AdventCirno's tool? I can't seem to find a 'free' online JSON translator that can handle some 91k keys. This works just fine for me on games, so it should be able to translate the game through the manual translation dump, but I have no idea how to go about doing that.
 

R.

Newbie
Nov 9, 2017
44
17
Here is how you translate MTool's ManualTransFile.json with Sugoi and Translator++

1. Copy ManualTransFile.json into a temporary folder. Optionally, open it with Visual Studio Code or another text editor that supports regex and search/replace ^.*"[ -~]+": .*\n with nothing. This deletes the lines whose keys are plain ASCII, to make things easier (if you'd rather script this step, see the sketch after these steps). If you do this, check that the last line doesn't end with ,

2. Open up Translator++, select new project, scroll down and select the custom parser option. Click on create a new model, choose JSON as the file group, choose regular expression and paste in /(\".*\": "(.*)")/gm or /(\"[^\x00-\x7F]+\": "(.*)")/gm (the second one will only select lines whose keys start with non-ASCII characters, so it may skip some lines with mixed characters).
Enter 2 in capture groups.
Save the file and close the editor.

3. Load the parser file you just created and select the temporary folder where ManualTransFile.json is. Let Translator++ do its thing.

4. Click the checkbox next to ManualTransFile. Click the ++ button in the upper right of the Translator++ window and select Tools / Sugoi Translator Server / Local Server Manager. Use your own judgement to configure it, then start it.

5. Go again to the ++ menu, Automation / Batch Translation. Select Sugoi Translator and configure the rest as you wish. I recommend enabling "save project on each batch", since the next step may cause lockups or crashing depending on your rig. Click Translate Now and wait for the process to finish.

6. Now you can clean up the translation or just click the Inject button to apply it. Copy the new ManualTransFile.json back into the game directory and load it with MTool.

If there are errors, you can open the file in Visual Studio Code or similar and it will highlight them.
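If you'd rather script step 1 than do the regex by hand, here's a rough sketch - it just drops entries whose keys are pure ASCII and rewrites the file, which also sidesteps the trailing-comma worry (back up the original first):
Python:
# Sketch: drop pure-ASCII keys from ManualTransFile.json before loading
# it into Translator++ (make a backup copy of the original first).
import json

with open("ManualTransFile.json", encoding="utf-8") as f:
    data = json.load(f)

filtered = {k: v for k, v in data.items() if not k.isascii()}
print(f"kept {len(filtered)} of {len(data)} entries")

with open("ManualTransFile.json", "w", encoding="utf-8") as f:
    json.dump(filtered, f, ensure_ascii=False, indent=2)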
 
Last edited: