arXiv submission date: 2026-04-29
📄 Abstract - Causal Learning with Neural Assemblies

Can Neural Assemblies -- groups of neurons that fire together and strengthen through co-activation -- learn the direction of causal influence between variables? While established as a computationally general substrate for classification, parsing, and planning, neural assemblies have not yet been shown to internalize causal directionality. We demonstrate that the inherent operations of neural assemblies -- projection, local plasticity control, and sparse winner selection -- are sufficient for directional learning. We introduce DIRECT (DIRectional Edge Coupling/Training), a mechanism that co-activates source and target assemblies under an adaptive gain schedule to internalize directed relations. Unlike backpropagation-based methods, DIRECT relies solely on local plasticity, making the resulting causal claims auditable at the mechanism level. Our findings are verified through a dual-readout validation strategy: (i) synaptic-strength asymmetry, measuring the emergent weight gap between forward and reverse links, and (ii) functional propagation overlap, quantifying the reliability of directional signal flow. Across multiple domains, the framework achieves perfect structural recovery under a supervised, known-structure setting. These results establish neural assemblies as an auditable bridge between biologically plausible dynamics and formal causal models, offering an "explainable by design" framework where causal claims are traceable to specific neural winners and synaptic asymmetries.

Top-level tags: machine learning, neural networks
Detailed tags: causal learning, neural assemblies, local plasticity, directionality, explainability

Causal Learning with Neural Assemblies


1️⃣ One-Sentence Summary

This paper proposes a mechanism called DIRECT that uses the inherent operations of neural assemblies (projection, local plasticity, and sparse winner selection) to learn the direction of causal influence between variables without relying on backpropagation, achieving auditable causal inference within a biologically plausible framework.
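The core idea (co-activating assemblies under a plasticity gain so that only forward, co-active synapses strengthen, then reading out the forward/reverse weight gap) can be illustrated with a toy sketch. This is a minimal illustration under assumed mechanics, not the paper's actual DIRECT implementation: the assembly size `K`, the gain `BETA` (a fixed stand-in for the paper's adaptive gain schedule), and the multiplicative Hebbian rule are all hypothetical choices for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 50      # neurons per area (hypothetical size)
K = 5       # sparse winners per step (assembly cap size)
BETA = 0.1  # plasticity gain; the paper uses an adaptive gain schedule

# Random forward (X -> Y) and reverse (Y -> X) synapses, initially of equal scale.
W_fwd = rng.random((N, N)) * 0.01
W_rev = rng.random((N, N)) * 0.01

x_winners = rng.choice(N, K, replace=False)  # fixed source assembly

for _ in range(30):
    # Projection: source winners drive the target area;
    # sparse winner selection keeps only the top-K most-driven neurons.
    drive = W_fwd[x_winners].sum(axis=0)
    y_winners = np.argsort(drive)[-K:]
    # Local plasticity: strengthen only forward synapses between co-active winners.
    for i in x_winners:
        W_fwd[i, y_winners] *= (1 + BETA)

# Readout (i): synaptic-strength asymmetry between forward and reverse links.
fwd_strength = W_fwd[np.ix_(x_winners, y_winners)].mean()
rev_strength = W_rev[np.ix_(y_winners, x_winners)].mean()
print(fwd_strength > rev_strength)  # forward links dominate: direction internalized
```

Because plasticity only ever touches the forward co-active synapses, the weight gap between `W_fwd` and `W_rev` emerges from purely local updates, which is what makes the directional claim auditable at the mechanism level.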

Source: arXiv: 2604.26919