Multilevel Training for Kolmogorov-Arnold Networks
1️⃣ One-sentence summary
This paper proposes a multilevel training algorithm for Kolmogorov-Arnold networks (KANs). By exploiting their distinctive mathematical structure, it decomposes training into a coarse-to-fine hierarchy of levels, achieving orders-of-magnitude improvements over conventional training, with particularly strong results on tasks such as physics-informed neural networks.
Algorithmic speedup of training common neural architectures is made difficult by the lack of structure guaranteed by the function compositions inherent to such networks. In contrast to multilayer perceptrons (MLPs), Kolmogorov-Arnold networks (KANs) provide more structure by expanding learned activations in a specified basis. This paper exploits this structure to develop practical algorithms and theoretical insights, yielding training speedup via multilevel training for KANs. To do so, we first establish an equivalence between KANs with spline basis functions and multichannel MLPs with power ReLU activations through a linear change of basis. We then analyze how this change of basis affects the geometry of gradient-based optimization with respect to spline knots. This change of basis motivates a multilevel training approach, where we train a sequence of KANs naturally defined through a uniform refinement of spline knots, with analytic geometric interpolation operators between models. The interpolation scheme enables a "properly nested hierarchy" of architectures, ensuring that interpolation to a fine model preserves the progress made on coarse models, while the compact support of spline basis functions ensures complementary optimization on subsequent levels. Numerical experiments demonstrate that our multilevel training approach can achieve orders-of-magnitude improvements in accuracy over conventional training of comparable KANs or MLPs, particularly for physics-informed neural networks. Finally, this work demonstrates how principled design of neural networks can lead to exploitable structure, and in this case, multilevel algorithms that can dramatically improve training performance.
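One standard way to see the stated connection between spline bases and power ReLU activations is the classical truncated-power representation of splines; the sketch below is that textbook identity, not the paper's exact construction. A degree-p spline with interior knots t_1 < ... < t_m can be written as a polynomial plus a linear combination of truncated powers, each of which is a shifted ReLU raised to the power p:

```latex
% Truncated-power (one-sided) basis for a degree-p spline s:
s(x) \;=\; \sum_{k=0}^{p} a_k\, x^k \;+\; \sum_{j=1}^{m} b_j\, (x - t_j)_+^{p},
\qquad
(x - t_j)_+^{p} \;=\; \bigl(\mathrm{ReLU}(x - t_j)\bigr)^{p}.
```

Since B-splines and truncated powers span the same space, the two parameterizations are related by a linear (change-of-basis) map, which is presumably the kind of equivalence the abstract invokes between spline-based KAN layers and power-ReLU units.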
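To make the coarse-to-fine idea concrete, here is a minimal one-dimensional sketch of multilevel training, not the paper's algorithm: fit degree-1 (hat-function) spline coefficients by gradient descent, uniformly refine the knots by inserting midpoints, and prolong the coefficients by linear interpolation, which reproduces the coarse model exactly on the fine grid (the "properly nested" property). All function names (`hat_basis`, `prolong`, `train`) and hyperparameters are illustrative choices.

```python
import numpy as np

def hat_basis(x, knots):
    """Degree-1 B-spline (hat) basis matrix, shape (len(x), len(knots))."""
    h = knots[1] - knots[0]  # uniform knot spacing assumed throughout
    return np.clip(1.0 - np.abs(x[:, None] - knots[None, :]) / h, 0.0, None)

def prolong(c):
    """Map coarse coefficients to a uniformly refined knot grid.
    For linear splines, midpoint averaging is exact interpolation, so the
    fine model initially reproduces the coarse one (proper nesting)."""
    f = np.empty(2 * len(c) - 1)
    f[0::2] = c                       # coarse knot values carry over
    f[1::2] = 0.5 * (c[:-1] + c[1:])  # new midpoint knots by interpolation
    return f

def train(c, knots, x, y, steps=200):
    """Plain gradient descent on the mean-squared loss."""
    B = hat_basis(x, knots)
    lr = len(knots)  # step size scaled with knot density (heuristic)
    for _ in range(steps):
        c = c - lr * B.T @ (B @ c - y) / len(x)
    return c

# Target data: sample a smooth function on [0, 1].
x = np.linspace(0.0, 1.0, 400)
y = np.sin(2 * np.pi * x)

knots = np.linspace(0.0, 1.0, 5)  # coarsest level
c = np.zeros(len(knots))
for level in range(3):            # train coarse -> fine
    c = train(c, knots, x, y)
    loss = np.mean((hat_basis(x, knots) @ c - y) ** 2)
    print(f"level {level}: {len(knots)} knots, loss {loss:.2e}")
    knots = np.linspace(0.0, 1.0, 2 * len(knots) - 1)  # insert midpoints
    c = prolong(c)
```

Because each prolongation is exact, the loss never jumps up when moving to a finer level; subsequent optimization only needs to resolve the new, locally supported degrees of freedom, which is the intuition behind the compact-support argument in the abstract.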
Source: arXiv: 2603.04827