Rhetorical Questions in LLM Representations: A Linear Probing Study
1️⃣ One-Sentence Summary
This study finds that large language models do not rely on a single, universal internal feature to recognize rhetorical questions. Instead, they encode them via multiple distinct linear directions, which separately capture different cues such as discourse-level rhetorical stance grounded in extended argumentation and interrogative form driven by local syntactic structure.
Rhetorical questions are asked not to seek information but to persuade or signal stance. How large language models internally represent them remains unclear. We analyze rhetorical questions in LLM representations using linear probes on two social-media datasets with different discourse contexts, and find that rhetorical signals emerge early and are most stably captured by last-token representations. Rhetorical questions are linearly separable from information-seeking questions within datasets, and remain detectable under cross-dataset transfer, reaching AUROC around 0.7-0.8. However, we demonstrate that transferability does not simply imply a shared representation. Probes trained on different datasets produce different rankings when applied to the same target corpus, with overlap among the top-ranked instances often below 0.2. Qualitative analysis shows that these divergences correspond to distinct rhetorical phenomena: some probes capture discourse-level rhetorical stance embedded in extended argumentation, while others emphasize localized, syntax-driven interrogative acts. Together, these findings suggest that rhetorical questions in LLM representations are encoded by multiple linear directions emphasizing different cues, rather than a single shared direction.
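The probing methodology described above can be illustrated with a minimal sketch. All specifics here are assumptions for illustration: the paper's actual model, layer choice, and probe hyperparameters are not given in this summary, and the random vectors below merely stand in for last-token LLM representations. The sketch trains logistic-regression probes on two synthetic "datasets" whose rhetorical signal lies along correlated but non-identical directions, then checks cross-dataset AUROC and the overlap among top-ranked instances on a shared target corpus:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
d = 64  # stand-in for the hidden-state dimensionality

def make_dataset(direction, n=500):
    """Synthetic 'last-token representations': rhetorical questions (y=1)
    are shifted along a dataset-specific linear direction."""
    y = rng.integers(0, 2, n)
    X = rng.normal(0.0, 1.0, (n, d)) + np.outer(y, direction)
    return X, y

# Two rhetorical directions at 45 degrees: correlated, but not identical,
# mimicking "multiple linear directions emphasizing different cues".
u = rng.normal(size=d); u /= np.linalg.norm(u)
w = rng.normal(size=d); w -= (w @ u) * u; w /= np.linalg.norm(w)
v = (u + w) / np.sqrt(2)

XA, yA = make_dataset(u)          # dataset A (e.g., one social-media corpus)
XB, yB = make_dataset(v)          # dataset B (different discourse context)
probe_A = LogisticRegression(max_iter=1000).fit(XA, yA)
probe_B = LogisticRegression(max_iter=1000).fit(XB, yB)

# Cross-dataset transfer: probe A still detects B's rhetorical signal.
cross_auroc = roc_auc_score(yB, probe_A.predict_proba(XB)[:, 1])
print("cross-dataset AUROC (A -> B):", round(cross_auroc, 3))

# Shared target corpus: the two probes rank instances differently.
XT, yT = make_dataset((u + v) / 2, n=1000)
sA = probe_A.predict_proba(XT)[:, 1]
sB = probe_B.predict_proba(XT)[:, 1]
k = 100
topA = set(np.argsort(-sA)[:k])
topB = set(np.argsort(-sB)[:k])
print("top-%d overlap:" % k, len(topA & topB) / k)
```

The point of the sketch is the dissociation the abstract highlights: transfer AUROC can stay well above chance even while the two probes' top-ranked instances diverge, because transferability requires only correlated directions, not a shared one.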
Source: arXiv: 2604.14128