Universe Routing: Why Self-Evolving Agents Need Epistemic Control
1️⃣ One-sentence summary
This paper argues that the key to letting agents learn and make decisions reliably over a lifetime is an "epistemic control layer" that, like a traffic controller, selects the reasoning framework best suited to the nature of each question (e.g., frequentist or Bayesian), thereby avoiding the systematic errors that arise from mixing incompatible inference methods.
A critical failure mode of current lifelong agents is not lack of knowledge, but the inability to decide how to reason. When an agent encounters "Is this coin fair?" it must recognize whether to invoke frequentist hypothesis testing or Bayesian posterior inference, frameworks that are epistemologically incompatible. Mixing them produces not minor errors, but structural failures that propagate across decision chains. We formalize this as the universe routing problem: classifying questions into mutually exclusive belief spaces before invoking specialized solvers. Our key findings challenge conventional assumptions: (1) hard routing to heterogeneous solvers matches soft MoE accuracy while being 7x faster, because epistemically incompatible frameworks cannot be meaningfully averaged; (2) a 465M-parameter router achieves a 2.3x smaller generalization gap than keyword-matching baselines, indicating semantic rather than surface-level reasoning; (3) when expanding to new belief spaces, rehearsal-based continual learning achieves zero forgetting, outperforming EWC by 75 percentage points, suggesting that modular epistemic architectures are fundamentally more amenable to lifelong learning than regularization-based approaches. These results point toward a broader architectural principle: reliable self-evolving agents may require an explicit epistemic control layer that governs reasoning framework selection.
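The coin-fairness example and the hard-routing principle can be illustrated with a minimal toy sketch. This is not the paper's implementation: the keyword heuristic below stands in for the learned 465M-parameter router, and the two solvers are textbook frequentist and Bayesian treatments of the same coin data. The point it demonstrates is the routing contract itself: exactly one belief space answers each question, and p-values and posteriors are never averaged.

```python
import math

# Illustrative sketch of "universe routing" (all names hypothetical):
# a hard router assigns a question to one belief space, then invokes
# exactly one specialized solver; outputs are never mixed across spaces.

def frequentist_solver(heads, flips, p0=0.5):
    """Two-sided exact binomial test of H0: p = p0. Returns a p-value."""
    def pmf(k):
        return math.comb(flips, k) * p0**k * (1 - p0)**(flips - k)
    observed = pmf(heads)
    # Sum the probability of every outcome at least as extreme as observed.
    return sum(pmf(k) for k in range(flips + 1) if pmf(k) <= observed + 1e-12)

def bayesian_solver(heads, flips, a=1, b=1):
    """Beta(a, b) prior + binomial likelihood -> posterior mean of p."""
    return (a + heads) / (a + b + flips)

def route(question):
    """Toy hard router: a keyword heuristic standing in for a learned router."""
    q = question.lower()
    if "posterior" in q or "prior" in q or "believe" in q:
        return "bayesian", bayesian_solver
    return "frequentist", frequentist_solver

# Hard routing: the question is dispatched to a single framework.
universe, solver = route("Is this coin fair?")
answer = solver(7, 10)  # 7 heads in 10 flips -> a p-value, not a posterior
```

A soft mixture would instead blend the two solver outputs, but a p-value and a posterior mean answer different questions, so their weighted average has no coherent interpretation; hard routing sidesteps that incompatibility entirely.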
Source: arXiv: 2603.14799