Abstract - Controlled Langevin Dynamics for Sampling of Feedforward Neural Networks Trained with Minibatches
Sampling the parameter space of artificial neural networks according to a Boltzmann distribution provides insight into the geometry of low-loss solutions and offers an alternative to conventional loss minimization for training. However, exact sampling methods such as hybrid Monte Carlo (hMC), while formally correct, become computationally prohibitive for realistic datasets because they require repeated evaluation of full-batch gradients. We introduce a pseudo-Langevin (pL) dynamics that enables efficient Boltzmann sampling of feed-forward neural networks trained with large datasets by using minibatches in a controlled manner. The method exploits the statistical properties of minibatch gradient noise and adjusts fictitious masses and friction coefficients to ensure that the induced stochastic process efficiently samples the desired equilibrium distribution. We numerically validate the approach by comparing its equilibrium statistics with those obtained from exact hMC sampling. Performance benchmarks demonstrate that, while hMC rapidly becomes inefficient as network size increases, the pL scheme maintains high computational diffusion and scales favorably to networks with over one million parameters. Finally, we show that sampling at intermediate temperatures yields optimal generalization performance, comparable to SGD, without requiring a validation set or an early-stopping procedure. These results establish controlled minibatch Langevin dynamics as a practical and scalable tool for exploring and exploiting the solution space of large neural networks.
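To make the core idea concrete, here is a minimal sketch of Boltzmann sampling with minibatch Langevin dynamics on a toy one-parameter loss. This is a generic SGLD-style (overdamped) update, not the paper's pL scheme: the fictitious-mass and friction corrections for minibatch gradient noise described in the abstract are omitted, and all names, step sizes, and the toy loss are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "dataset": the loss L(w) = mean((w - x_i)^2) / 2 over the data
# has full-batch gradient w - mean(x), with minimum near mean(x) ~ 2.0.
data = rng.normal(loc=2.0, scale=1.0, size=1000)

def minibatch_grad(w, batch):
    """Stochastic gradient of the toy loss on one minibatch."""
    return w - batch.mean()

def sgld(w0, n_steps=20000, eta=1e-2, temperature=0.5, batch_size=32):
    """Overdamped Langevin iteration targeting exp(-L(w)/T):
    w <- w - eta * grad + sqrt(2 * eta * T) * xi,  xi ~ N(0, 1).
    Minibatch gradient noise is left uncorrected here, unlike the
    controlled pL dynamics of the paper."""
    w = w0
    samples = np.empty(n_steps)
    for t in range(n_steps):
        batch = rng.choice(data, size=batch_size, replace=False)
        noise = rng.normal() * np.sqrt(2.0 * eta * temperature)
        w = w - eta * minibatch_grad(w, batch) + noise
        samples[t] = w
    return samples

samples = sgld(w0=0.0)
# After burn-in, samples concentrate around the loss minimum,
# with a spread set by the temperature.
print(samples[5000:].mean())
```

At low temperature the chain collapses onto the minimizer; at higher temperature it explores a wider region of the loss landscape, which is the knob the paper tunes to trade off fit against generalization.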
Controlled Langevin Dynamics for Sampling of Feedforward Neural Networks Trained with Minibatches
1️⃣ One-sentence summary
This paper proposes a pseudo-Langevin dynamics method for efficient sampling with minibatch gradients. Like conventional exact sampling methods, it can explore the low-loss regions of a neural network's parameter space, but at a much lower computational cost, scaling to networks with millions of parameters. Moreover, simply tuning the sampling temperature is enough to find the model with the best generalization performance, without relying on a validation set or early stopping.