*-PLUIE: Personalisable metric with Llm Used for Improved Evaluation
1️⃣ One-sentence summary
This paper proposes a new evaluation method called *-PLUIE, which improves on existing techniques to assess the quality of AI-generated text more accurately while keeping computational cost low, and which can be personalised for different tasks.
Evaluating the quality of automatically generated text often relies on LLM-as-a-judge (LLM-judge) methods. While effective, these approaches are computationally expensive and require post-processing. To address these limitations, we build upon ParaPLUIE, a perplexity-based LLM-judge metric that estimates confidence over "Yes/No" answers without generating text. We introduce *-PLUIE, task-specific prompting variants of ParaPLUIE, and evaluate their alignment with human judgement. Our experiments show that personalised *-PLUIE achieves stronger correlations with human ratings while maintaining low computational cost.
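The core mechanism described in the abstract, scoring via confidence over "Yes/No" answers rather than generated text, can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the function name and the assumption that we already have the model's log-probabilities for the "Yes" and "No" tokens (from a single forward pass over the evaluation prompt) are ours.

```python
import math

def yes_no_confidence(logprob_yes: float, logprob_no: float) -> float:
    """Turn the model's log-probabilities for the "Yes" and "No" tokens
    into a score in [0, 1]: the probability mass assigned to "Yes" after
    renormalising over just these two answers. No text is generated, so
    no post-processing of model output is needed."""
    p_yes = math.exp(logprob_yes)
    p_no = math.exp(logprob_no)
    return p_yes / (p_yes + p_no)

# A model leaning towards "Yes" yields a score above 0.5.
print(round(yes_no_confidence(-0.2, -2.0), 3))  # prints 0.858
```

In practice the two log-probabilities would come from the judge LLM's next-token distribution after a prompt such as "Is this paraphrase acceptable? Answer Yes or No:", which is what makes the approach cheap compared with free-form generation.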
Source: arXiv:2602.15778