
arXiv submission date: 2026-03-16
📄 Abstract - Muon Converges under Heavy-Tailed Noise: Nonconvex Hölder-Smooth Empirical Risk Minimization

Muon is a recently proposed optimizer that enforces orthogonality in parameter updates by projecting gradients onto the Stiefel manifold, leading to stable and efficient training of large-scale deep neural networks. Meanwhile, previously reported results indicate that stochastic noise in practical machine learning can exhibit heavy-tailed behavior, violating the standard bounded-variance assumption. In this paper, we consider the problem of minimizing a nonconvex Hölder-smooth empirical risk under heavy-tailed stochastic noise. We show that Muon converges to a stationary point of the empirical risk under a boundedness condition that accounts for heavy-tailed stochastic noise. In addition, we show that Muon converges faster than mini-batch SGD.
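The orthogonalized update the abstract refers to can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes a 2-D gradient matrix and uses a simple cubic Newton-Schulz iteration (a common way to approximate the polar/orthogonal factor without an explicit SVD); the function names and coefficients are illustrative choices.

```python
import numpy as np

def orthogonalize(G, steps=5):
    """Approximate the orthogonal factor of G (its polar decomposition)
    with a cubic Newton-Schulz iteration: X <- 1.5*X - 0.5*X @ X.T @ X.
    Scaling by the Frobenius norm guarantees the spectral norm is <= 1,
    which keeps the iteration in its convergence region."""
    X = G / (np.linalg.norm(G) + 1e-7)
    for _ in range(steps):
        X = 1.5 * X - 0.5 * X @ X.T @ X
    return X

def muon_step(W, G, lr=0.02):
    """One hypothetical Muon-style step: descend along the
    orthogonalized gradient instead of the raw gradient."""
    return W - lr * orthogonalize(G)
```

With enough iterations the singular values of the output are pushed toward 1, so the update direction carries equal energy in every singular direction; this is the "orthogonality in parameter updates" property the abstract attributes to Muon.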

Top-level tags: machine learning, model training, theory
Detailed tags: optimization, stochastic optimization, heavy-tailed noise, nonconvex optimization, convergence analysis

Muon Converges under Heavy-Tailed Noise: Nonconvex Hölder-Smooth Empirical Risk Minimization


1️⃣ One-sentence summary

This paper proves that, even under the demanding condition that the stochastic noise in training follows a heavy-tailed distribution (i.e., extreme outliers occur), the recently proposed Muon optimizer still reliably converges to a stationary point of the neural network's empirical risk, and that it converges faster than traditional mini-batch stochastic gradient descent.

Source: arXiv:2603.15059