arXiv submission date: 2026-02-04
📄 Abstract - Revisiting Prompt Sensitivity in Large Language Models for Text Classification: The Role of Prompt Underspecification

Large language models (LLMs) are widely used as zero-shot and few-shot classifiers, where task behaviour is largely controlled through prompting. A growing number of works have observed that LLMs are sensitive to prompt variations, with small changes leading to large changes in performance. However, in many cases, the investigation of sensitivity is performed using underspecified prompts that provide minimal task instructions and weakly constrain the model's output space. In this work, we argue that a significant portion of the observed prompt sensitivity can be attributed to prompt underspecification. We systematically study and compare the sensitivity of underspecified prompts and prompts that provide specific instructions. Utilising performance analysis, logit analysis, and linear probing, we find that underspecified prompts exhibit higher performance variance and lower logit values for relevant tokens, while instruction prompts suffer less from such problems. However, linear probing suggests that prompt underspecification has only a marginal impact on internal LLM representations, with its effects instead emerging in the final layers. Overall, our findings highlight the need for more rigour when investigating and mitigating prompt sensitivity.
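To make the underspecified-versus-specified contrast concrete, here is a minimal sketch of the kind of logit analysis the abstract describes: compare the next-token logits a causal LM assigns to the label words under a bare prompt and under an explicit instruction prompt. This is not the paper's exact setup; the model name (`gpt2`), the prompt wording, and the label set are illustrative assumptions.

```python
# Sketch: compare next-token label logits under an underspecified prompt
# versus an instruction prompt. Model, prompts, and labels are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

review = "The film was a complete waste of two hours."

# Underspecified: no task description, no constraint on the output space.
underspecified = f"Review: {review}\nSentiment:"

# Specified: explicit task instruction and an explicit label set.
specified = (
    'Classify the sentiment of the movie review as either "positive" or '
    '"negative". Answer with exactly one of these words.\n'
    f"Review: {review}\nSentiment:"
)

def label_logits(prompt, labels=(" positive", " negative")):
    """Return the next-token logit for each label's first sub-token."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # logits at the final position
    return {
        lab.strip(): logits[tokenizer.encode(lab, add_special_tokens=False)[0]].item()
        for lab in labels
    }

print("underspecified:", label_logits(underspecified))
print("specified:     ", label_logits(specified))
```

On the paper's account, the underspecified prompt should place lower logit mass on the label tokens and show more variance across paraphrases, while the instruction prompt constrains the output space and narrows that gap.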

Top tags: llm · natural language processing · model evaluation
Detailed tags: prompt sensitivity · prompt engineering · text classification · zero-shot learning · model robustness

Revisiting Prompt Sensitivity in Large Language Models for Text Classification: The Role of Prompt Underspecification


1️⃣ One-Sentence Summary

This paper finds that much of the prompt sensitivity large language models exhibit on text classification tasks is attributable to the prompts themselves being underspecified, i.e., vaguely defined and lacking concrete instructions, and that providing clear, specific instructions substantially reduces this sensitivity and stabilises model performance.
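The abstract's linear-probing analysis can likewise be sketched in a few lines: fit a logistic-regression probe on a layer's last-token hidden states and check how well the class label is linearly decodable. The toy data, placeholder model, and layer choice below are all assumptions for illustration, not the paper's benchmarks.

```python
# Sketch: a linear probe on one layer's hidden states. Dataset, model,
# prompt template, and layer index are illustrative assumptions.
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

# Toy labelled data; the paper uses standard text-classification datasets.
texts = ["great movie", "terrible plot", "loved it", "boring and slow"]
labels = [1, 0, 1, 0]

def hidden_state(text, layer=-1):
    """Last-token hidden state at the given layer for a bare prompt."""
    inputs = tokenizer(f"Review: {text}\nSentiment:", return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[layer][0, -1].numpy()

X = [hidden_state(t) for t in texts]
X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.5, random_state=0, stratify=labels
)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("probe accuracy:", probe.score(X_te, y_te))
```

Running the same probe at different layers, and on representations from underspecified versus instruction prompts, is one way to test the paper's claim that underspecification barely affects internal representations and surfaces mainly in the final layers.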

Source: arXiv: 2602.04297