Ensemble Self-Training for Unsupervised Machine Translation
1️⃣ One-Sentence Summary
This paper proposes a new method: train multiple structurally different translation models and let them learn from one another, which effectively improves unsupervised machine translation quality, while ultimately deploying only the single best model to preserve efficiency.
We present an ensemble-driven self-training framework for unsupervised neural machine translation (UNMT). Starting from a primary language pair, we train multiple UNMT models that share the same translation task but differ in an auxiliary language, inducing structured diversity across models. We then generate pseudo-translations for the primary pair using token-level ensemble decoding, averaging model predictions in both directions. These ensemble outputs are used as synthetic parallel data to further train each model, allowing the models to improve via shared supervision. At deployment time, we select a single model by validation performance, preserving single-model inference cost. Experiments show statistically significant improvements over single-model UNMT baselines, with mean gains of 1.7 chrF when translating from English and 0.67 chrF when translating into English.
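The token-level ensemble decoding described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each model exposes a function returning next-token logits given a source sentence and a target prefix, and it uses greedy search for brevity (the model interface, vocabulary, and `eos_id` are all hypothetical).

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_greedy_decode(models, src, max_len=20, eos_id=2):
    """Token-level ensemble decoding (sketch): at each step, average the
    next-token probability distributions of all models, then extend the
    prefix with the argmax token. `models` is a list of callables
    (src, prefix) -> logits over the shared vocabulary -- a hypothetical
    interface standing in for the actual UNMT decoders."""
    prefix = []
    for _ in range(max_len):
        # Average per-model probabilities (not raw logits).
        probs = np.mean([softmax(m(src, prefix)) for m in models], axis=0)
        tok = int(np.argmax(probs))
        prefix.append(tok)
        if tok == eos_id:
            break
    return prefix

# Toy demo: two "models" over a 3-token vocabulary that agree on
# stopping (token 2 = EOS) but weight the first token differently.
m1 = lambda src, prefix: (np.array([0.0, 2.0, 1.0]) if not prefix
                          else np.array([0.0, 0.0, 5.0]))
m2 = lambda src, prefix: (np.array([0.0, 1.5, 0.0]) if not prefix
                          else np.array([0.0, 0.0, 5.0]))
hyp = ensemble_greedy_decode([m1, m2], src=None)
print(hyp)  # → [1, 2]
```

In the paper's pipeline, the hypotheses produced this way for the primary language pair serve as synthetic parallel data on which each individual model is then further trained.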
Source: arXiv: 2603.17087