arXiv submission date: 2026-01-15
📄 Abstract - Reasoning Models Generate Societies of Thought

Large language models have achieved remarkable capabilities across domains, yet mechanisms underlying sophisticated reasoning remain elusive. Recent reasoning models outperform comparable instruction-tuned models on complex cognitive tasks, attributed to extended computation through longer chains of thought. Here we show that enhanced reasoning emerges not from extended computation alone, but from simulating multi-agent-like interactions -- a society of thought -- which enables diversification and debate among internal cognitive perspectives characterized by distinct personality traits and domain expertise. Through quantitative analysis and mechanistic interpretability methods applied to reasoning traces, we find that reasoning models like DeepSeek-R1 and QwQ-32B exhibit much greater perspective diversity than instruction-tuned models, activating broader conflict between heterogeneous personality- and expertise-related features during reasoning. This multi-agent structure manifests in conversational behaviors, including question-answering, perspective shifts, and the reconciliation of conflicting views, and in socio-emotional roles that characterize sharp back-and-forth conversations, together accounting for the accuracy advantage in reasoning tasks. Controlled reinforcement learning experiments reveal that base models increase conversational behaviors when rewarded solely for reasoning accuracy, and fine-tuning models with conversational scaffolding accelerates reasoning improvement over base models. These findings indicate that the social organization of thought enables effective exploration of solution spaces. We suggest that reasoning models establish a computational parallel to collective intelligence in human groups, where diversity enables superior problem-solving when systematically structured, which suggests new opportunities for agent organization to harness the wisdom of crowds.

Top tags: llm agents theory
Detailed tags: reasoning models multi-agent simulation collective intelligence mechanistic interpretability chain of thought

Reasoning Models Generate Societies of Thought


1️⃣ One-sentence summary

This paper finds that advanced reasoning models like DeepSeek-R1 solve complex problems well not merely because of longer chains of thought, but because they internally simulate a "society of thought" composed of virtual roles with distinct personalities and areas of expertise. Debate and collaboration among these roles allow the model to explore better solutions, much like the collective intelligence of human groups brainstorming together.

Source: arXiv:2601.10825