📄 Paper Summary
ROOT: Robust Orthogonalized Optimizer for Neural Network Training
1️⃣ One-Sentence Summary
This paper proposes ROOT, a new optimizer that combines adaptive orthogonalization with noise-suppression mechanisms to address instability in large-model training, converging faster and reaching better final performance than existing optimizers in noisy settings.
2️⃣ Abstract
The optimization of large language models (LLMs) remains a critical challenge, particularly as model scaling exacerbates sensitivity to algorithmic imprecision and training instability. Recent advances in optimizers have improved convergence efficiency through momentum orthogonalization, but suffer from two key robustness limitations: dimensional fragility in orthogonalization precision and vulnerability to outlier-induced noise. To address these robustness challenges, we introduce ROOT, a Robust Orthogonalized Optimizer that enhances training stability through dual robustness mechanisms. First, we develop a dimension-robust orthogonalization scheme using adaptive Newton iterations with fine-grained coefficients tailored to specific matrix sizes, ensuring consistent precision across diverse architectural configurations. Second, we introduce an optimization-robust framework via proximal optimization that suppresses outlier noise while preserving meaningful gradient directions. Extensive experiments demonstrate that ROOT achieves significantly improved robustness, with faster convergence and superior final performance compared to both Muon and Adam-based optimizers, particularly in noisy and non-convex scenarios. Our work establishes a new paradigm for developing robust and precise optimizers capable of handling the complexities of modern large-scale model training. The code will be available at this https URL.
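The abstract names two mechanisms: a Newton-iteration orthogonalization with coefficients tailored to matrix size, and a proximal step that suppresses outlier noise. To make those concrete, here is a minimal NumPy sketch of how such an update could be structured. Everything beyond the abstract is an assumption for illustration: the shape rule in `select_coeffs`, the soft-threshold proximal operator, and all hyperparameters are placeholders rather than ROOT's released values (only the (3.4445, -4.7750, 2.0315) triple is Muon's published baseline).

```python
import numpy as np

# Coefficients of Muon's quintic Newton-Schulz polynomial; ROOT replaces this
# single triple with a fine-grained, shape-dependent table (values not given here).
MUON_COEFFS = (3.4445, -4.7750, 2.0315)

def select_coeffs(shape):
    """Hypothetical stand-in for ROOT's per-matrix-size coefficient table.

    The paper tailors Newton-iteration coefficients to specific matrix sizes;
    the aspect-ratio rule and the second triple below are illustrative only.
    """
    aspect = max(shape) / min(shape)
    if aspect < 4.0:
        return MUON_COEFFS
    return (3.0, -3.2, 1.2)  # made-up triple for strongly rectangular matrices

def newton_schulz_orthogonalize(G, coeffs, steps=5, eps=1e-7):
    """Approximately orthogonalize a 2-D matrix G (push its singular values
    toward 1) via the quintic iteration X <- aX + (bA + cA^2)X, with A = XX^T."""
    a, b, c = coeffs
    X = G / (np.linalg.norm(G) + eps)   # scale so all singular values are <= 1
    transposed = X.shape[0] > X.shape[1]
    if transposed:                      # iterate on the smaller Gram matrix
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * (A @ A)) @ X
    return X.T if transposed else X

def proximal_shrink(M, lam):
    """Entrywise soft-thresholding: the proximal operator of lam * ||.||_1.

    A generic way to damp outlier entries while keeping the dominant update
    direction; ROOT's actual proximal formulation may differ.
    """
    return np.sign(M) * np.maximum(np.abs(M) - lam, 0.0)

def root_step(W, grad, momentum, lr=0.02, beta=0.95, lam=1e-3):
    """One illustrative update: momentum -> outlier suppression -> orthogonalize."""
    momentum = beta * momentum + grad
    update = newton_schulz_orthogonalize(
        proximal_shrink(momentum, lam), coeffs=select_coeffs(W.shape)
    )
    return W - lr * update, momentum

# Toy usage: a gradient with a few huge outlier entries.
rng = np.random.default_rng(0)
W = rng.normal(size=(256, 1024)) * 0.02
m = np.zeros_like(W)
g = rng.normal(size=W.shape)
g[rng.random(W.shape) < 1e-3] += 50.0   # inject sparse outliers
W, m = root_step(W, g, m)
```

In this sketch the suppression runs on the momentum matrix before orthogonalization, so a handful of extreme entries cannot skew the Newton-Schulz iterate; whether ROOT orders the two steps this way is not specified in the abstract.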