Today, LLMs are Swiss Army knives, and machine translation (MT) is one of their tools. Is this the end of MT research? In this talk, I argue that the connection between LLM and MT research runs both ways. I present some of our recent work on advancing multilingual LLMs, on tools to estimate translation quality, and on how the two can be combined for test-time scaling.
First, I present xCOMET, an open-source learned metric that integrates sentence-level evaluation and error span detection, achieving state-of-the-art performance across all types of meta-evaluation (sentence-level, system-level, and error span detection). Moreover, it highlights and categorizes error spans, thus enriching the quality assessment.
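As an illustration, here is a minimal sketch of scoring a translation with xCOMET through the open-source unbabel-comet package; the checkpoint name, the example inputs, and the output fields follow the public release but should be treated as assumptions that may vary across versions.

```python
# Hedged sketch: score one translation with xCOMET via unbabel-comet.
# Assumes `pip install unbabel-comet` and access to the gated
# Unbabel/XCOMET-XL checkpoint on the Hugging Face Hub.
from comet import download_model, load_from_checkpoint

model_path = download_model("Unbabel/XCOMET-XL")  # fetch the checkpoint
model = load_from_checkpoint(model_path)

data = [
    {
        "src": "All I seek is to understand.",        # source sentence
        "mt": "Tudo o que procuro é compreender.",    # machine translation
        "ref": "Tudo o que pretendo é compreender.",  # human reference
    }
]

output = model.predict(data, batch_size=8, gpus=1)
print(output.scores)                # sentence-level quality scores in [0, 1]
print(output.metadata.error_spans)  # detected error spans with severity labels
```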
Then, I present Tower, a suite of open multilingual LLMs for translation-related tasks. Tower models are created through continued pretraining on a carefully curated multilingual mixture of monolingual and parallel data. Combining Tower with COMET reranking achieved the best results in 8 out of 11 language pairs in the WMT General Translation shared task, according to human evaluation.
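To make the test-time scaling idea concrete, below is a minimal best-of-N reranking sketch: sample several hypotheses from a Tower-style model and keep the one preferred by a reference-free COMET quality estimator. The checkpoint names (Unbabel/TowerInstruct-7B-v0.2, Unbabel/wmt22-cometkiwi-da), the prompt, and the sampling settings are illustrative assumptions, not the exact WMT submission pipeline.

```python
# Hedged sketch of best-of-N translation reranking with a COMET QE model.
# Checkpoint names and decoding settings are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from comet import download_model, load_from_checkpoint

gen_name = "Unbabel/TowerInstruct-7B-v0.2"  # assumed Tower checkpoint
tokenizer = AutoTokenizer.from_pretrained(gen_name)
model = AutoModelForCausalLM.from_pretrained(
    gen_name, torch_dtype=torch.bfloat16, device_map="auto"
)

src = "The cat sat on the mat."
messages = [{
    "role": "user",
    "content": f"Translate the following text from English into Portuguese.\n"
               f"English: {src}\nPortuguese:",
}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sample N diverse hypotheses with temperature sampling.
out = model.generate(
    input_ids, do_sample=True, temperature=0.9,
    num_return_sequences=8, max_new_tokens=128,
)
hyps = [
    tokenizer.decode(seq[input_ids.shape[1]:], skip_special_tokens=True).strip()
    for seq in out
]

# Score each hypothesis with a reference-free COMET model and keep the best.
qe = load_from_checkpoint(download_model("Unbabel/wmt22-cometkiwi-da"))
scores = qe.predict(
    [{"src": src, "mt": h} for h in hyps], batch_size=8, gpus=1
).scores
best_idx = max(range(len(hyps)), key=lambda i: scores[i])
print(hyps[best_idx])
```

The same recipe scales with N: sampling more candidates spends more compute at test time in exchange for a higher chance that at least one hypothesis scores well under the quality estimator.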
Finally, I describe EuroLLM, an ongoing project whose goal is to train an open multilingual LLM from scratch, made in Europe using the European HPC infrastructure (EuroHPC). The latest release (EuroLLM-9B) supports 35 languages, including all 24 official EU languages, and achieves strong results on various benchmarks, comparable to or better than the best existing models of similar size.
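For those who want to try the model, a minimal sketch of loading the instruction-tuned release through Hugging Face transformers follows; the repository id utter-project/EuroLLM-9B-Instruct and the chat-template usage reflect the public release but are stated here as assumptions.

```python
# Hedged sketch: generate a translation with EuroLLM-9B-Instruct.
# The repository id is assumed from the public release.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "utter-project/EuroLLM-9B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{
    "role": "user",
    "content": "Translate the following English sentence into Portuguese:\n"
               "All models are wrong, but some are useful.",
}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(out[0][input_ids.shape[1]:], skip_special_tokens=True))
```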