While it's only available as an optional package in the SLRMTL posts for now, I'm proud to announce that:
Known Issues:
Picture based text not translated.
Will be replaced with:
Known Issues:
The picture based translations are terrible.
I've finally found a way to automate them with yet another offline neural network.
For now I'm just really happy that it works at all, and with how incredibly fast it is. (I can process 200 pictures in less than 5 minutes.)
I will obviously try to improve it further and maybe one day have it be the same translation quality as SLR.
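For anyone curious, such a pipeline boils down to something along these lines. (This is only a minimal sketch for illustration; the manga-ocr package and the translate_line() helper here are stand-in assumptions, not necessarily the actual models or tools being used.)

```python
# Minimal sketch: batch-OCR a folder of images, then feed the text to a translator.
# manga-ocr and translate_line() are stand-ins, not the actual SLRMTL internals.
from pathlib import Path
from manga_ocr import MangaOcr  # offline neural OCR model for Japanese text in images

def translate_line(jp_text: str) -> str:
    """Placeholder for whatever offline translator gets plugged in."""
    raise NotImplementedError

def process_folder(folder: str) -> dict[str, str]:
    mocr = MangaOcr()  # load the model once so the per-image cost stays low
    results = {}
    for img in sorted(Path(folder).glob("*.png")):
        jp = mocr(str(img))               # one OCR pass per picture
        results[img.name] = translate_line(jp)
    return results
```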
Here's an example input:
Output translated with Google:
Output translated with DeepL:
Output translated with Sugoiv4 (raw vanilla, not my SLR stuff):
The main issue for now is that it cannot detect full sentences; it goes line by line instead, which means only partial text chunks get fed to the translator.
As a result the translator currently lacks the needed context for a proper translation.
(Japanese is sadly extremely context-based. Moonrunes have 500 meanings depending on what's before or after them.)
I already have some ideas for how to group lines before feeding them to the translator to improve quality, but for now I really need to get some rest.
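(For the curious, that grouping idea could look roughly like the sketch below. It's only an assumption of one way to do it, merging OCR lines until a Japanese sentence ender shows up, so whole sentences get sent to the translator instead of fragments.)

```python
# Rough sketch of the grouping idea (an assumed approach, not the actual implementation):
# merge consecutive OCR lines until sentence-final punctuation is reached.
SENTENCE_ENDERS = ("。", "！", "？", "!", "?")

def group_lines(ocr_lines: list[str]) -> list[str]:
    sentences, buffer = [], ""
    for line in ocr_lines:
        buffer += line.strip()
        # Flush once the buffer ends in sentence-final punctuation.
        if buffer.endswith(SENTENCE_ENDERS):
            sentences.append(buffer)
            buffer = ""
    if buffer:  # leftover text that never hit an ender
        sentences.append(buffer)
    return sentences

# Example: three OCR lines that are really one sentence, plus a short one.
print(group_lines(["今日は天気が", "いいので", "散歩します。", "楽しい！"]))
# -> ['今日は天気がいいので散歩します。', '楽しい！']
```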
This was a lot harder than it looks...