arXiv submission date: 2026-04-04
📄 Abstract - When Models Know More Than They Say: Probing Analogical Reasoning in LLMs

Analogical reasoning is a core cognitive faculty essential for narrative understanding. While LLMs perform well when surface and structural cues align, they struggle in cases where an analogy is not apparent on the surface but requires latent information, suggesting limitations in abstraction and generalisation. In this paper we compare a model's probed representations with its prompted performance at detecting narrative analogies, revealing an asymmetry: for rhetorical analogies, probing significantly outperforms prompting in open-source models, while for narrative analogies, they achieve a similar (low) performance. This suggests that the relationship between internal representations and prompted behavior is task-dependent and may reflect limitations in how prompting accesses available information.

Top tags: llm, natural language processing, model evaluation
Detailed tags: analogical reasoning, probing, internal representations, narrative understanding, abstraction

When Models Know More Than They Say: Probing Analogical Reasoning in LLMs


1️⃣ One-sentence summary

By comparing a large language model's internal representations with its prompted answers, this paper finds that for certain analogical reasoning tasks the model internally encodes substantially more information than it can express through standard prompting, showing that how much information can be extracted from a model depends closely on the task type.
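The probing-versus-prompting comparison can be made concrete with a small sketch. The snippet below trains a linear probe (logistic regression) on mean-pooled hidden states of candidate text pairs; if the probe's accuracy exceeds the model's prompted accuracy on the same pairs, the representations "know more than the model says". Everything here is illustrative and assumed, not from the paper: the model name (`gpt2`), the layer choice, the pooling strategy, and the toy labelled pairs are placeholders for the paper's actual open-source models and rhetorical/narrative analogy datasets.

```python
# Minimal sketch of a linear probe over hidden states, under the
# assumptions stated above (model, layer, pooling, and data are
# hypothetical placeholders, not the paper's setup).
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

MODEL_NAME = "gpt2"  # placeholder open-source model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def hidden_state(text: str, layer: int = -1) -> np.ndarray:
    """Mean-pooled hidden state of `text` at the given layer."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**inputs)
    # out.hidden_states: tuple of (1, seq_len, dim) tensors, one per layer
    return out.hidden_states[layer].mean(dim=1).squeeze(0).numpy()

# Toy pairs labelled 1 if analogous, 0 otherwise (hypothetical data;
# a real probe needs far more examples than this).
pairs = [
    ("The cell is a factory.", "Mitochondria are its power plants.", 1),
    ("Electrons orbit the nucleus.", "Planets orbit the sun.", 1),
    ("The cell is a factory.", "The weather was cold yesterday.", 0),
    ("Electrons orbit the nucleus.", "She bought fresh bread today.", 0),
]
# Feature for each pair: concatenated hidden states of the two texts.
X = np.stack(
    [np.concatenate([hidden_state(a), hidden_state(b)]) for a, b, _ in pairs]
)
y = np.array([label for _, _, label in pairs])

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0
)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("probe accuracy:", probe.score(X_te, y_te))
```

Comparing this probe accuracy against the model's accuracy when simply asked "Are these two passages analogous?" is the kind of task-dependent gap the paper reports: large for rhetorical analogies, negligible for narrative ones.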

Source: arXiv:2604.03877