Abstract - Deep Learning as Neural Low-Degree Filtering: A Spectral Theory of Hierarchical Feature Learning
Understanding how deep neural networks learn useful internal representations from data remains a central open problem in the theory of deep learning. We introduce Neural Low-Degree Filtering (Neural LoFi), a stylized limit of gradient-based training in which hierarchical feature learning becomes an explicit iterative spectral procedure. In this limit, the dynamics at each layer decouple: given the current representation, the next layer selects directions with maximal accessible low-degree correlation to the label. This yields a tractable surrogate mechanism for deep learning, together with a natural kernel-space interpretation. Neural LoFi provides a mathematically explicit framework for studying multi-layer feature learning beyond the lazy regime. It predicts how representations are selected layer by layer, explains how concepts emerge at a given sample complexity, and gives a concrete mechanism by which depth progressively constructs new features from old ones through low-degree compositionality. We complement the theory with mechanistic experiments on fully connected and convolutional architectures, showing that Neural LoFi improves over lazy random-feature baselines, recovers meaningful structured filters, and predicts representations aligned with early gradient-descent feature discovery on real datasets.
Deep Learning as Neural Low-Degree Filtering: A Spectral Theory of Hierarchical Feature Learning
1️⃣ One-sentence summary
This paper proposes a theoretical framework called Neural Low-Degree Filtering (Neural LoFi), which simplifies the gradient-based training of deep neural networks into an iterative spectral method that extracts, layer by layer, the low-degree features most strongly correlated with the label. This explains how deep networks progressively build hierarchical features, and the method outperforms classical random-feature baselines.
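As a rough illustration only, the "each layer spectrally selects label-correlated directions, then iterates" idea can be sketched as a toy procedure. Everything below is an assumption for illustration, not the paper's actual algorithm: the function name `lofi_layer`, the choice of a label-weighted second moment as the spectral object, and the ReLU nonlinearity are all hypothetical.

```python
import numpy as np

def lofi_layer(Z, y, k):
    """Hypothetical sketch of one Neural LoFi step (not the paper's
    exact update): pick k directions of the current representation Z
    with maximal low-degree correlation to the label y, then apply a
    nonlinearity to form the next layer's representation."""
    yc = y - y.mean()
    Zc = Z - Z.mean(axis=0)
    # Label-weighted second moment M = (1/n) * sum_i y_i z_i z_i^T:
    # its leading eigenvectors are directions along which a degree-2
    # correlation with the label is largest -- the "spectral" step.
    M = (Zc * yc[:, None]).T @ Zc / len(y)
    eigvals, eigvecs = np.linalg.eigh(M)
    top = np.argsort(np.abs(eigvals))[-k:]   # top-k by |eigenvalue|
    W = eigvecs[:, top]
    return np.maximum(Z @ W, 0.0)            # ReLU features for next layer

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))
y = np.sign(X[:, 0] * X[:, 1])               # a low-degree (degree-2) target
Z = X
for _ in range(2):                           # layers decouple: just iterate
    Z = lofi_layer(Z, y, k=8)
```

The point of the sketch is the decoupling claimed in the abstract: each layer only needs the current representation and the label to run its own spectral selection, so depth amounts to repeating the same filtering step on progressively transformed features.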