arXiv submission date: 2026-02-18
📄 Abstract - Differentially Private Non-convex Distributionally Robust Optimization

Real-world deployments routinely face distribution shifts, group imbalances, and adversarial perturbations, under which the traditional Empirical Risk Minimization (ERM) framework can degrade severely. Distributionally Robust Optimization (DRO) addresses this issue by optimizing the worst-case expected loss over an uncertainty set of distributions, offering a principled approach to robustness. Meanwhile, as the training data in DRO often involves sensitive information, safeguarding it against leakage under Differential Privacy (DP) is essential. In contrast to classical DP-ERM, DP-DRO has received much less attention due to its minimax optimization structure with an uncertainty constraint. To bridge this gap, we provide a comprehensive study of DP-(finite-sum)-DRO with $\psi$-divergence and non-convex loss. First, we study DRO with a general $\psi$-divergence by reformulating it as a minimization problem, and develop a novel $(\varepsilon, \delta)$-DP optimization method, called DP Double-Spider, tailored to this structure. Under mild assumptions, we show that it achieves a utility bound of $\mathcal{O}(\frac{1}{\sqrt{n}} + (\frac{\sqrt{d \log(1/\delta)}}{n \varepsilon})^{2/3})$ in terms of the gradient norm, where $n$ denotes the data size and $d$ denotes the model dimension. We further improve the utility rate for specific divergences. In particular, for DP-DRO with KL-divergence, by transforming the problem into a compositional finite-sum optimization problem, we develop a DP Recursive-Spider method and show that it achieves a utility bound of $\mathcal{O}((\frac{\sqrt{d \log(1/\delta)}}{n\varepsilon})^{2/3})$, matching the best-known result for non-convex DP-ERM. Experimentally, we demonstrate that our proposed methods outperform existing approaches for DP minimax optimization.
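For context, the formulations the abstract alludes to can be sketched with the standard textbook identities below (the usual $\psi$-divergence duality and its KL special case, not equations copied from the paper). They show why the inner worst-case problem can be rewritten as a minimization, and why the KL case becomes a compositional (log-sum-exp) finite-sum objective.

```latex
% Sketch of the standard formulations (textbook identities, not taken from the paper).
% Primal psi-divergence DRO over the empirical distribution P_n on n samples:
\min_{\theta}\; \sup_{Q:\, D_{\psi}(Q \,\|\, P_n) \le \rho}\; \mathbb{E}_{\xi \sim Q}\big[\ell(\theta;\xi)\big],
\qquad
D_{\psi}(Q \,\|\, P) = \mathbb{E}_{P}\!\left[\psi\!\left(\frac{dQ}{dP}\right)\right].

% Duality turns the inner supremum into a minimization, so the whole problem
% becomes a pure minimization over (theta, lambda, eta), with psi* the convex conjugate:
\sup_{Q:\, D_{\psi}(Q \,\|\, P_n) \le \rho} \mathbb{E}_{Q}\big[\ell(\theta;\xi)\big]
= \min_{\lambda \ge 0,\; \eta \in \mathbb{R}}\;
  \lambda\rho + \eta + \lambda\, \mathbb{E}_{P_n}\!\left[\psi^{*}\!\left(\frac{\ell(\theta;\xi)-\eta}{\lambda}\right)\right].

% For KL divergence (psi(t) = t log t - t + 1) the dual collapses to a
% log-sum-exp, i.e. a compositional finite-sum objective:
\sup_{Q:\, \mathrm{KL}(Q \,\|\, P_n) \le \rho} \mathbb{E}_{Q}\big[\ell(\theta;\xi)\big]
= \min_{\lambda > 0}\; \lambda\rho + \lambda \log \mathbb{E}_{P_n}\!\big[\exp\!\big(\ell(\theta;\xi)/\lambda\big)\big].
```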

Top-level tags: machine learning theory, model training
Detailed tags: differential privacy, distributionally robust optimization, non-convex optimization, privacy-preserving machine learning, minimax optimization

Differentially Private Non-convex Distributionally Robust Optimization


1️⃣ One-Sentence Summary

This paper proposes new differentially private optimization methods for training machine learning models that maintain stable performance when the data distribution shifts, while rigorously protecting the sensitive information in the training data from leakage.
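To make the two ingredients concrete, here is a minimal, purely illustrative sketch: the KL-DRO worst-case reweighting of per-sample losses, and a clipped, Gaussian-noised gradient step of the kind typically used to obtain $(\varepsilon, \delta)$-DP. This is not the paper's DP Double-Spider or DP Recursive-Spider algorithm (those use variance-reduced recursive gradient estimators); all function names, hyperparameters, and the noise calibration below are hypothetical placeholders.

```python
import numpy as np

def kl_dro_objective_and_grads(theta, X, y, lam=1.0, rho=0.1):
    """KL-DRO dual (log-sum-exp) objective for logistic loss, plus per-sample gradients."""
    logits = X @ theta
    losses = np.log1p(np.exp(-y * logits))                  # per-sample logistic losses
    weights = np.exp(losses / lam)
    weights /= weights.sum()                                 # softmax weights = worst-case reweighting
    objective = lam * rho + lam * np.log(np.mean(np.exp(losses / lam)))
    per_sample_grads = (-y / (1.0 + np.exp(y * logits)))[:, None] * X
    return objective, per_sample_grads, weights

def dp_dro_step(theta, per_sample_grads, weights, lr=0.1, clip=1.0, sigma=1.0, rng=None):
    """One clipped, Gaussian-noised gradient step on the DRO-reweighted loss (illustrative only)."""
    rng = np.random.default_rng() if rng is None else rng
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    clipped = per_sample_grads * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    grad = (weights[:, None] * clipped).sum(axis=0)          # worst-case-weighted average of clipped grads
    grad += rng.normal(0.0, sigma * clip / len(weights), size=theta.shape)  # Gaussian mechanism noise (toy calibration)
    return theta - lr * grad

# Toy usage on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.where(rng.normal(size=200) > 0, 1.0, -1.0)
theta = np.zeros(5)
for _ in range(50):
    _, psg, w = kl_dro_objective_and_grads(theta, X, y)
    theta = dp_dro_step(theta, psg, w, rng=rng)
```

The sketch only illustrates the interplay between worst-case reweighting and gradient perturbation; it does not carry the variance-reduction or privacy-accounting machinery behind the utility bounds stated in the abstract.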

Source: arXiv 2602.16155