
arXiv submission date: 2026-03-04
📄 Abstract - FINEST: Improving LLM Responses to Sensitive Topics Through Fine-Grained Evaluation

Large Language Models (LLMs) often generate overly cautious and vague responses on sensitive topics, sacrificing helpfulness for safety. Existing evaluation frameworks lack systematic methods to identify and address specific weaknesses in responses to sensitive topics, making it difficult to improve both safety and helpfulness simultaneously. To address this, we introduce FINEST, a FINE-grained response evaluation taxonomy for Sensitive Topics, which breaks down helpfulness and harmlessness into errors across three main categories: Content, Logic, and Appropriateness. Experiments on a Korean sensitive-question dataset demonstrate that our score- and error-based improvement pipeline, guided by FINEST, significantly improves model responses across all three categories, outperforming refinement without guidance. Notably, score-based improvement, which provides category-specific scores and justifications, yields the largest gains, reducing the error-sentence ratio for Appropriateness by up to 33.09%. This work lays the foundation for a more explainable and comprehensive evaluation and improvement of LLM responses to sensitive questions.

Top-level tags: llm, model evaluation, natural language processing
Detailed tags: sensitive topics, response evaluation, fine-grained taxonomy, safety-helpfulness trade-off, evaluation framework

FINEST: Improving LLM Responses to Sensitive Topics Through Fine-Grained Evaluation


1️⃣ One-Sentence Summary

This paper proposes FINEST, a fine-grained evaluation framework that decomposes the quality of responses to sensitive topics into specific errors across three dimensions (Content, Logic, and Appropriateness) and uses them to guide targeted improvement of large language model responses, significantly increasing helpfulness while preserving safety.

Source: arXiv:2603.04123