TranslateGemma Technical Report
1️⃣ One-Sentence Summary
This paper introduces TranslateGemma, a family of open machine translation models. A two-stage fine-tuning process substantially improves the translation ability of the underlying Gemma 3 models; the resulting models perform strongly across multiple evaluations, with the smaller models matching the performance of larger baselines, all while retaining strong multimodal capabilities.
We present TranslateGemma, a suite of open machine translation models based on the Gemma 3 foundation models. To enhance the inherent multilingual capabilities of Gemma 3 for the translation task, we employ a two-stage fine-tuning process. First, supervised fine-tuning is performed on a rich mixture of high-quality, large-scale synthetic parallel data generated with state-of-the-art models and human-translated parallel data. This is followed by a reinforcement learning phase, where we optimize translation quality using an ensemble of reward models, including MetricX-QE and AutoMQM. We demonstrate the effectiveness of TranslateGemma with human evaluation on the WMT25 test set across 10 language pairs and with automatic evaluation on the WMT24++ benchmark across 55 language pairs. Automatic metrics show consistent and substantial gains over the baseline Gemma 3 models across all sizes. Notably, smaller TranslateGemma models often achieve performance comparable to larger baseline models, offering improved efficiency. We also show that TranslateGemma models retain strong multimodal capabilities, with enhanced performance on the Vistra image translation benchmark. The release of the open TranslateGemma models aims to provide the research community with powerful and adaptable tools for machine translation.
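The abstract does not specify how the reward-model signals are combined during the RL phase. The sketch below is a minimal, hypothetical illustration of one way an ensemble reward could be formed from a MetricX-QE error score and AutoMQM-style error counts; the equal weights, the 1/5 MQM penalty scheme, and the score handling are assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch of an ensemble reward for the RL phase.
# MetricX-QE is a quality-estimation metric: it scores a (source,
# translation) pair without a reference, and LOWER scores mean better
# quality. AutoMQM prompts an LLM to mark MQM-style error spans; here
# we assume its output has been reduced to minor/major error counts.

def mqm_penalty(minor: int, major: int) -> float:
    """Assumed MQM-style penalty using the common 1/5 weighting
    for minor/major errors (an assumption, not from the paper)."""
    return 1.0 * minor + 5.0 * major

def ensemble_reward(metricx_qe: float, minor: int, major: int,
                    w_qe: float = 0.5, w_mqm: float = 0.5) -> float:
    """Blend two quality signals into one scalar RL reward.
    Both signals measure errors (lower is better), so each is
    negated to follow the higher-reward-is-better convention."""
    r_qe = -metricx_qe
    r_mqm = -mqm_penalty(minor, major)
    return w_qe * r_qe + w_mqm * r_mqm

# A cleaner candidate (low QE error score, one minor MQM error)
# receives a higher reward than a noticeably worse one.
good = ensemble_reward(metricx_qe=1.2, minor=1, major=0)
bad = ensemble_reward(metricx_qe=8.5, minor=2, major=1)
assert good > bad
```

Negating error-oriented metrics so that higher reward means better output is a common convention in RL fine-tuning; the actual TranslateGemma recipe may combine, calibrate, or gate these signals differently.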
Source: arXiv:2601.09012