Translation or Recitation? Calibrating Evaluation Scores for Machine Translation of Extremely Low-Resource Languages
1️⃣ One-sentence summary
This paper proposes a set of difficulty metrics called FRED that reveal and calibrate inflated scores in machine translation evaluation for extremely low-resource languages, caused by data leakage and model pre-training bias, thereby providing the field with a more transparent and reliable evaluation foundation.
The landscape of extremely low-resource (XLR) machine translation (MT) is characterized by perplexing variability in reported performance, often making results across different language pairs difficult to contextualize. For researchers focused on specific language groups -- such as ancient languages -- it is nearly impossible to determine whether breakthroughs reported in other contexts (e.g., native African or American languages) result from superior methodologies or are merely artifacts of benchmark collection. To address this problem, we introduce the FRED Difficulty Metrics -- the Fertility Ratio (F), Retrieval Proxy (R), Pre-training Exposure (E), and Corpus Diversity (D) -- which serve as dataset-intrinsic metrics to contextualize reported scores. These metrics reveal that a significant portion of result variability is explained by train-test overlap and pre-training exposure rather than model capability. Additionally, we identify that some languages -- particularly extinct and non-Latin indigenous languages -- suffer from poor tokenization coverage (high token fertility), highlighting a fundamental limitation of transferring models from high-resource languages that lack a shared vocabulary. By providing these indices alongside performance scores, we enable more transparent evaluation of cross-lingual transfer and provide a more reliable foundation for the XLR MT community.
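To make two of the FRED dimensions concrete, the sketch below computes a token fertility ratio (subword tokens per whitespace word) and an n-gram-overlap retrieval proxy for train-test leakage. The paper's exact definitions are not reproduced here; both formulas, the function names, and the n-gram choice are illustrative assumptions.

```python
# Hypothetical sketch of two FRED-style dataset metrics.
# The exact definitions in the paper may differ; these are illustrative.

def fertility_ratio(sentences, tokenize):
    """Average subword tokens per whitespace-delimited word.

    Higher values suggest poorer tokenizer coverage of the language,
    as seen for extinct and non-Latin indigenous languages.
    """
    total_tokens = sum(len(tokenize(s)) for s in sentences)
    total_words = sum(len(s.split()) for s in sentences)
    return total_tokens / max(total_words, 1)

def retrieval_proxy(train_sentences, test_sentences, n=4):
    """Fraction of test-side word n-grams already present in the training set.

    A high value suggests a model could score well by 'reciting'
    memorized training material rather than translating.
    """
    def ngrams(sentence):
        words = sentence.split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    train_grams = set()
    for s in train_sentences:
        train_grams |= ngrams(s)
    test_grams = set()
    for s in test_sentences:
        test_grams |= ngrams(s)
    if not test_grams:
        return 0.0
    return len(test_grams & train_grams) / len(test_grams)
```

For example, with a naive character-level tokenizer, a language whose words average two characters yields a fertility ratio of 2.0, and a test sentence sharing half its 4-grams with the training data yields a retrieval proxy of 0.5.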
Source: arXiv: 2603.25222