Traditional translators are pretty good at looking up single words or phrases, but give them longer pieces of text to work with and they fall apart pretty quickly, since they can't process the context. DeepL is probably the best one right now, or running Sugoi locally.
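One way to help with the context problem is to send the service whole paragraphs instead of line-by-line. A rough sketch using DeepL's official `deepl` Python package (the `Translator`/`translate_text` calls are from their docs; the chunk size is an arbitrary assumption on my part):

```python
def chunk_paragraphs(text: str, max_chars: int = 4000) -> list[str]:
    """Split on blank lines, keeping paragraphs intact so the
    translator sees complete context instead of clipped fragments."""
    chunks: list[str] = []
    current = ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

def translate(text: str, target_lang: str = "EN-US") -> str:
    import os
    import deepl  # official DeepL package; needs an API key
    translator = deepl.Translator(os.environ["DEEPL_AUTH_KEY"])
    out = [translator.translate_text(c, target_lang=target_lang).text
           for c in chunk_paragraphs(text)]
    return "\n\n".join(out)
```

Keeping paragraphs whole matters more than the exact chunk size; clipping mid-paragraph is what loses pronouns and tone.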
The ChatGPT API is pretty good at translating, since it can understand the context of the text, and/or you can provide context yourself. I think the better paid option at the moment would be Claude 3.5 from Anthropic. ChatGPT has its ups and downs, as they keep making it more difficult to work with NSFW stuff.
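Providing context yourself basically means stuffing it into the system prompt alongside the text. A minimal sketch with the official OpenAI Python SDK (the model name is just an assumption, pick whatever you have access to; the prompt wording is my own):

```python
def build_messages(text: str, context: str = "") -> list[dict]:
    """Assemble a chat request that carries translation context
    (character names, tone, earlier scenes) alongside the text."""
    system = "You are a translator. Translate the user's text into English."
    if context:
        system += f" Context for this passage: {context}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": text},
    ]

def translate(text: str, context: str = "") -> str:
    from openai import OpenAI  # official SDK; needs OPENAI_API_KEY set
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: substitute your preferred model
        messages=build_messages(text, context),
    )
    return resp.choices[0].message.content
```

The same message structure works against Anthropic's API with their SDK swapped in, since both take a system prompt plus user turns.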
Local models are absolutely possible, but the better the model, the beefier the computer you need. Llama 8B or its finetunes can fit on a single RTX 3060 (approx. 8 gigabytes of VRAM, plus 2 for the model's context). I think the best local model for translation would be Command R+ from Cohere, but it requires at least 64 GB of RAM or VRAM, or a mix of both. Such large models are also very slow: Command R+ yields about half a word a second (0.7 tokens/s) on my machine (64 GB RAM / 3060), which isn't ideal, while the small 8B Llama can do about 10 words a second (25 tokens/s) but isn't as good. There are models with sizes in between, but their multi-language ability varies.
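Those memory numbers are easy to ballpark yourself: quantized weights take roughly parameter count × bits per weight / 8 bytes, plus a couple of GB for context. A quick sketch (the 2.5 tokens-per-word ratio is back-calculated from my own speeds above, and varies a lot by language):

```python
def model_gb(params_b: float, bits: int) -> float:
    """Rough weight-memory estimate in GB: billions of params * bits / 8.
    Add ~2 GB on top for context and runtime overhead."""
    return params_b * bits / 8

def words_per_second(tokens_per_s: float, tokens_per_word: float = 2.5) -> float:
    """Convert generation speed; non-English text often costs 2-3 tokens per word."""
    return tokens_per_s / tokens_per_word

# Llama 8B at 8-bit: ~8 GB of weights + ~2 GB context fits a 12 GB 3060.
# Command R+ (~104B) at 4-bit: ~52 GB of weights, hence needing 64 GB total.
```

It also shows why the in-between sizes are tempting: a 30B-class model at 4-bit is around 15 GB, which a RAM/VRAM split can handle at tolerable speed.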