Neural network optimization strategies and the topography of the loss landscape
1️⃣ One-sentence summary
By comparing two optimization algorithms, stochastic gradient descent and a quasi-Newton method, this paper finds that they locate different kinds of solutions on the neural network loss landscape: stochastic gradient descent tends to find flatter regions that generalize better, while the quasi-Newton method can reach deeper but more isolated minima that generalize worse, revealing the fundamental influence of the choice of optimization strategy on model robustness and transferability.
Neural networks are trained by optimizing multi-dimensional sets of fitting parameters on non-convex loss landscapes. Low-loss regions of the landscapes correspond to the parameter sets that perform well on the training data. A key issue in machine learning is the performance of trained neural networks on previously unseen test data. Here, we investigate neural network training by stochastic gradient descent (SGD), a non-convex global optimization algorithm which relies only on the gradient of the objective function. We contrast SGD solutions with those obtained via a non-stochastic quasi-Newton method, which utilizes curvature information to determine the step direction and Golden Section Search to choose the step size. We use several computational tools to investigate neural network parameters obtained by these two optimization methods, including kernel Principal Component Analysis and a novel, general-purpose algorithm for finding low-height paths between pairs of points on loss or energy landscapes, FourierPathFinder. We find that the choice of the optimizer profoundly affects the nature of the resulting solutions. SGD solutions tend to be separated by lower barriers than quasi-Newton solutions, even if both sets of solutions are regularized by early stopping to ensure adequate performance on test data. When allowed to fit extensively on the training data, quasi-Newton solutions occupy deeper minima on the loss landscapes that are not reached by SGD. However, these solutions are less generalizable to the test data. Overall, SGD explores smooth basins of attraction, while quasi-Newton optimization is capable of finding deeper, more isolated minima that are more spread out in the parameter space. Our findings help explain both the topography of the loss landscapes and the fundamental role of landscape exploration strategies in creating robust, transferable neural network models.
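The two optimization strategies contrasted in the abstract can be illustrated with a minimal toy sketch. Everything below is an illustrative assumption, not the paper's actual setup: the toy non-convex loss, the hyperparameters, and the function names (`sgd`, `bfgs_gss`, `golden_section`) are invented for this example. It shows noisy gradient-only updates versus a BFGS quasi-Newton direction whose step size is chosen by golden-section line search.

```python
import numpy as np

def loss(w):
    # Toy non-convex "landscape": quadratic bowl plus a cosine ripple
    return 0.5 * np.dot(w, w) + np.sum(np.cos(3.0 * w))

def grad(w):
    # Analytic gradient of the toy loss above
    return w - 3.0 * np.sin(3.0 * w)

def sgd(w, lr=0.05, steps=500, noise=0.1, seed=0):
    # Gradient-only updates; injected noise stands in for minibatch stochasticity
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        w = w - lr * (grad(w) + noise * rng.standard_normal(w.shape))
    return w

def golden_section(f, a, b, tol=1e-6):
    # Golden-section search for a minimum of a 1-D function f on [a, b]
    invphi = (np.sqrt(5.0) - 1.0) / 2.0
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2.0

def bfgs_gss(w, steps=50):
    # Quasi-Newton: curvature-informed direction (BFGS inverse-Hessian
    # approximation) plus golden-section line search for the step size
    n = w.size
    H = np.eye(n)                      # inverse-Hessian approximation
    g = grad(w)
    for _ in range(steps):
        p = -H @ g                     # search direction from curvature info
        t = golden_section(lambda t: loss(w + t * p), 0.0, 2.0)
        s = t * p
        w_new = w + s
        g_new = grad(w_new)
        y = g_new - g
        sy = s @ y
        if abs(sy) > 1e-12:            # standard BFGS inverse update
            rho = 1.0 / sy
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        w, g = w_new, g_new
    return w

w0 = np.full(4, 2.0)
print("initial loss:", loss(w0))
print("SGD     loss:", loss(sgd(w0.copy())))
print("BFGS    loss:", loss(bfgs_gss(w0.copy())))
```

On a real network the landscape is far higher-dimensional, but the structural contrast is the same: `sgd` sees only noisy gradients, while `bfgs_gss` combines a curvature-based direction with an explicit one-dimensional line search, which is what lets quasi-Newton methods drive deeper into narrow minima.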
Source: arXiv:2602.21276