Robust and Fast Training via Per-Sample Clipping
1️⃣ One-Sentence Summary
This paper proposes a gradient estimator, PS-Clip-SGD, that clips the gradient of each training sample individually to make training both more robust and faster. The method is proven to achieve optimal convergence rates for non-convex optimization, is shown on image classification tasks to be more efficient than conventional baselines, and the authors further find that, under gradient accumulation, applying clipping at the mini-batch level improves training at virtually no additional computational cost.
We propose a robust gradient estimator based on per-sample gradient clipping and analyze its properties both theoretically and empirically. We show that the resulting method, per-sample clipped SGD (PS-Clip-SGD), achieves optimal in-expectation convergence rates for non-convex optimization problems under heavy-tailed gradient noise. Moreover, we establish high-probability convergence guarantees that match the in-expectation rates up to polylogarithmic factors in the failure probability. We complement our theoretical results with multiple numerical experiments. In particular, we demonstrate that PS-Clip-SGD outperforms both vanilla SGD with momentum and standard gradient clipping when training AlexNet on the CIFAR-100 dataset, even after accounting for the additional computational time caused by per-sample clipping. We also empirically show that, in the presence of gradient accumulation, applying clipping at the mini-batch level can improve training performance while incurring virtually no additional computational cost. This finding is particularly interesting, as it contradicts the common practice of applying clipping only after all accumulation steps have been completed.
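To make the core idea concrete, below is a minimal sketch of per-sample clipped SGD on a toy least-squares problem with heavy-tailed noise. The model, data, clipping threshold `clip_c`, step size, and batch size are illustrative assumptions and do not reflect the paper's experimental setup (AlexNet on CIFAR-100); the sketch only shows the estimator itself: each sample's gradient is clipped to a fixed norm before the clipped gradients are averaged and used for the SGD step.

```python
# Minimal sketch of per-sample clipped SGD (PS-Clip-SGD) on a toy
# least-squares problem. All hyperparameters below are illustrative
# assumptions, not the paper's settings.
import numpy as np

rng = np.random.default_rng(0)
n, d = 256, 10
X = rng.normal(size=(n, d))
# Heavy-tailed label noise (Student-t with 2 degrees of freedom).
y = X @ rng.normal(size=d) + rng.standard_t(df=2, size=n)

w = np.zeros(d)
clip_c, lr, batch_size = 1.0, 0.05, 32  # assumed values

def per_sample_grads(w, Xb, yb):
    # Gradient of 0.5 * (x @ w - y)^2 for every sample; shape (batch, d).
    residuals = Xb @ w - yb
    return residuals[:, None] * Xb

for step in range(500):
    idx = rng.choice(n, size=batch_size, replace=False)
    grads = per_sample_grads(w, X[idx], y[idx])
    # Clip each sample's gradient to norm <= clip_c, then average.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    clipped = grads * np.minimum(1.0, clip_c / np.maximum(norms, 1e-12))
    w -= lr * clipped.mean(axis=0)

print("final training loss:", 0.5 * np.mean((X @ w - y) ** 2))
```

The same pattern extends to the gradient-accumulation setting discussed above: instead of clipping once after all accumulation steps, one would clip each accumulated mini-batch gradient before adding it to the running sum.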
Source: arXiv: 2605.02701