arXiv submission date: 2026-02-22
📄 Abstract - Implicit Bias and Convergence of Matrix Stochastic Mirror Descent

We investigate Stochastic Mirror Descent (SMD) with matrix parameters and vector-valued predictions, a framework relevant to multi-class classification and matrix completion problems. Focusing on the overparameterized regime, where the total number of parameters exceeds the number of training samples, we prove that SMD with matrix mirror functions $\psi(\cdot)$ converges exponentially to a global interpolator. Furthermore, we generalize classical implicit bias results of vector SMD by demonstrating that the matrix SMD algorithm converges to the unique solution minimizing the Bregman divergence induced by $\psi(\cdot)$ from initialization subject to interpolating the data. These findings reveal how matrix mirror maps dictate inductive bias in high-dimensional, multi-output problems.
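As a concrete illustration of the update rule analyzed in the abstract, here is a minimal sketch of matrix SMD on an overparameterized linear model with vector-valued predictions. The hyperbolic-entropy potential, the problem dimensions, and all variable names are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

# Minimal sketch (assumptions, not the paper's setup): matrix SMD on an
# overparameterized linear model y = W x with vector-valued predictions.
# Mirror potential: entrywise hyperbolic entropy with scale beta, chosen
# because its mirror map and inverse have closed forms:
#   grad_psi(W)    = arcsinh(W / beta)
#   grad_psi^-1(Z) = beta * sinh(Z)
rng = np.random.default_rng(0)
beta = 1.0

def grad_psi(W):
    return np.arcsinh(W / beta)

def grad_psi_inv(Z):
    return beta * np.sinh(Z)

# Overparameterized regime: 3 * 20 = 60 parameters, only 8 training samples.
d_out, d_in, n = 3, 20, 8
X = rng.standard_normal((n, d_in))
W_star = rng.standard_normal((d_out, d_in))
Y = X @ W_star.T                      # targets that an interpolator can fit

W = np.zeros((d_out, d_in))           # initialization W_0
eta = 0.01
for _ in range(30000):
    i = rng.integers(n)               # one random sample per step (stochastic)
    residual = W @ X[i] - Y[i]        # vector-valued prediction error
    grad_loss = np.outer(residual, X[i])   # grad of 0.5 * ||W x_i - y_i||^2
    # SMD step: move in the dual (mirror) space, then map back.
    W = grad_psi_inv(grad_psi(W) - eta * grad_loss)

train_err = float(np.max(np.abs(X @ W.T - Y)))
print(train_err)                      # tiny: W interpolates the training data
```

Because the problem is in the interpolation regime, the stochastic gradient vanishes at the limit point, so the iterates settle on an exact interpolator rather than a noise floor.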

Top-level tags: machine learning theory · model training
Detailed tags: stochastic mirror descent · implicit bias · overparameterization · matrix completion · convergence analysis

Implicit Bias and Convergence of Matrix Stochastic Mirror Descent


1️⃣ One-Sentence Summary

This paper proves that in the overparameterized regime, where the model has more parameters than training samples, the matrix stochastic mirror descent algorithm used for multi-class classification and matrix completion converges exponentially fast to a global interpolating solution, and that this solution is uniquely determined by the algorithm's chosen mirror function, revealing how the algorithm forms its inductive bias in high-dimensional, multi-output problems.
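In symbols, the implicit-bias statement above takes the standard constrained form for mirror-descent bias results (notation assumed to match the abstract: $\psi$ the mirror potential, $D_\psi$ its Bregman divergence, $W_0$ the initialization, $f_W$ the model's vector-valued prediction map):

```latex
W_\infty \;=\; \arg\min_{W \,:\, f_W(x_i) = y_i,\ i=1,\dots,n} D_\psi(W, W_0),
\qquad
D_\psi(W, W_0) \;=\; \psi(W) - \psi(W_0) - \langle \nabla \psi(W_0),\, W - W_0 \rangle .
```

That is, among all matrices interpolating the training data, SMD selects the one closest to the initialization as measured by the Bregman divergence of $\psi$.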

Source: arXiv:2602.18997