Lol, the amount of headache I've been having with trying to somehow get CT2 working with a non-CUDA capable GPU lately...
At any rate, it's not that straightforward. The most easily accessible script for the v6.0 toolkit, which supposedly gives you both CUDA and CT2 (together and separately), actually slows down the TL for me (when using only the CT2/CPU part, compared to vanilla v6.0).
There actually is a working UI mod for v4.0 of the toolkit with genuinely faster CT2 (as it should be), but good luck making it work with the most recent v6.0 (though I'd reckon someone tech-savvy would be able to do it, or at least lift the script file(s) from it that could be used in 6.0 too, with some changes).
Also, yeah, that "Aguuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu" shit the translator pulls from time to time is annoying, and it's not just a Sugoi thing (Google flat out cuts a big chunk of the TL, and DeepL does something similar: it can give you "Aguuuuuuuuuuuuuuuuuuu" too, or even some stock bullshit sentence with absolutely zero connection to the text you are trying to TL). Thankfully, it's quite rare in my experience. I'll still try the suggested edit to the Python script; thanks. If it slows things down too much overall, it may not be worth it...
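For what it's worth, a cheap workaround for the "Aguuuu..." runs (my own sketch, separate from the suggested script edit) is to post-process the output and collapse runaway character repeats with a regex:

```python
import re

def collapse_runs(text: str, max_run: int = 4) -> str:
    """Collapse any character repeated more than max_run times in a row."""
    return re.sub(
        r"(.)\1{%d,}" % max_run,          # a char followed by max_run+ more copies of itself
        lambda m: m.group(1) * max_run,   # keep only max_run copies
        text,
    )

print(collapse_runs("Aguuuuuuuuuuuuuuuuuuuuuu"))  # -> "Aguuuu"
```

This obviously can't recover the text the translator failed to produce, but it keeps the garbage from flooding the output.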
EDIT:
Haven't tried it yet, but the issue I see immediately is that when you use e.g. the CT2-only option (22), it's not flaskserver.py that gets called... So there is a parameter that fixes the repeats, but it makes the translator ~2x slower, so you really should combine it with either the CT2 patch or, better, the Nvidia CUDA patch.
As mentioned, CT2 speeds up translation ~4-5x and CUDA speeds it up ~10x.
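If you end up wiring CT2 in yourself, the CPU/GPU choice is just a constructor argument on the translator. A minimal sketch, assuming the ctranslate2 package is installed for the GPU check (model path and tokenization are omitted and purely illustrative):

```python
def pick_device() -> str:
    """Return "cuda" when CTranslate2 can see an Nvidia GPU, else fall back to CPU."""
    try:
        import ctranslate2
        if ctranslate2.get_cuda_device_count() > 0:
            return "cuda"
    except ImportError:
        pass  # ctranslate2 not installed; CPU-only fallback
    return "cpu"

# Hypothetical usage -- the converted model directory name is made up here:
# translator = ctranslate2.Translator("path/to/ct2_model", device=pick_device())
# results = translator.translate_batch(tokenized_lines)
print(pick_device())
```

This is why a non-CUDA GPU is awkward: CTranslate2's device options are "cpu" and "cuda", so without an Nvidia card you're stuck on the CPU path.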
NOTE: If I understand correctly, the CT2/CUDA mod already has this built in, as long as you launch it from the Sugoi-Translator-Offline-CT2 (click here).bat script.
Edit flaskserver.py in .\Code\backendServer\Program-Backend\Sugoi-Japanese-Translator\offlineTranslation\fairseq
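For reference, the anti-repeat parameter in question is fairseq's `no_repeat_ngram_size`. A hedged sketch of what the edit might look like; the actual variable and function names inside flaskserver.py may differ, and the beam value here is a guess:

```python
# Generation options forwarded to fairseq's beam search; names are illustrative.
GENERATION_KWARGS = {
    "beam": 5,                   # Sugoi's default beam size may differ
    "no_repeat_ngram_size": 3,   # forbid repeating any 3-token sequence; this is the ~2x slowdown
}

def translate_line(model, line: str) -> str:
    # model would be the fairseq hub model loaded by flaskserver.py;
    # extra keyword arguments to translate() are passed through to generation.
    return model.translate(line, **GENERATION_KWARGS)
```

Since the blocking works on token n-grams, it stops the decoder from looping on the same fragment over and over, which is exactly the "Aguuuu..." failure mode.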
View attachment 2746785