arXiv submission date: 2026-04-06
📄 Abstract - Beyond LLM-as-a-Judge: Deterministic Metrics for Multilingual Generative Text Evaluation

While Large Language Models (LLMs) are increasingly adopted as automated judges for evaluating generated text, their outputs are often costly and highly sensitive to prompt design, language, and aggregation strategies, which severely limits reproducibility. To address these challenges, we propose \textbf{\textit{OmniScore}}, a family of complementary, deterministic learned metrics built on small ($<$1B parameter) models. OmniScore approximates LLM-judge behavior while preserving the low latency and consistency of traditional model-based scoring. We trained the models on large-scale synthetic supervision ($\sim$564k instances in \textbf{107 languages}) and evaluated them on 8,617 manually annotated instances. The OmniScore family supports reliable, multi-dimensional scores across a variety of settings, including reference-based, source-grounded, and hybrid evaluation. We evaluate these models on question answering (QA), translation, and summarization in \textbf{6 languages}. Our results demonstrate that lightweight, deterministic learned metrics provide a practical and scalable alternative to frontier LLMs. Our models and datasets can be found at this https URL
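The key property the abstract emphasizes is determinism: unlike an LLM judge, a learned metric gives the same score for the same input every time, with no prompt or sampling variance. The sketch below illustrates only that property; the `omniscore_like` helper and its hash-trigram embedding are illustrative assumptions, not the paper's actual model, which is a trained $<$1B-parameter scorer.

```python
import hashlib
import math

def _embed(text: str, dim: int = 64) -> list[float]:
    """Hash character trigrams into a fixed-size unit vector (deterministic)."""
    vec = [0.0] * dim
    t = f"  {text.lower()}  "
    for i in range(len(t) - 2):
        h = int.from_bytes(hashlib.md5(t[i:i + 3].encode()).digest()[:4], "big")
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def omniscore_like(candidate: str, reference: str) -> float:
    """Cosine similarity of deterministic embeddings, mapped to [0, 1].

    Stand-in for a learned regression head: in OmniScore this would be a
    small trained model, but the property shown is the same -- identical
    inputs always yield the identical score.
    """
    a, b = _embed(candidate), _embed(reference)
    sim = sum(x * y for x, y in zip(a, b))
    return (sim + 1.0) / 2.0

s1 = omniscore_like("The cat sat on the mat.", "A cat is sitting on the mat.")
s2 = omniscore_like("The cat sat on the mat.", "A cat is sitting on the mat.")
assert s1 == s2  # deterministic: no prompt or sampling variance
```

Repeated calls with the same candidate/reference pair are bit-for-bit identical, which is what makes evaluation runs reproducible in a way LLM-as-a-judge pipelines are not.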

Top-level tags: llm model evaluation natural language processing
Detailed tags: multilingual evaluation learned metrics text generation deterministic scoring benchmark

Beyond LLM-as-a-Judge: Deterministic Metrics for Multilingual Generative Text Evaluation


1️⃣ One-sentence summary

This paper proposes OmniScore, a family of deterministic evaluation metrics that use small models to approximate the judging ability of large language models, providing reliable, multi-dimensional automatic scores for multilingual text-generation tasks at low cost and with high consistency.

Source: arXiv:2604.05083