arXiv submission date: 2026-05-05
📄 Abstract - FIBER: A Differentially Private Optimizer with Filter-Aware Innovation Bias Correction

Differentially private (DP) training protects individual examples by adding noise to gradients, but the injected noise interacts nontrivially with adaptive optimizers. Recent DP methods temporally filter privatized gradients to reduce variance; however, filtering also changes the DP noise statistics seen by AdamW's second-moment accumulator. As a result, bias corrections derived for unfiltered DP noise, such as subtracting σ_w², can become miscalibrated when filtering is present. We propose FiBeR, a DP optimizer designed for temporally filtered privatized gradients. FiBeR (i) performs denoising in innovation space by filtering the residual stream and integrating it to form the filtered gradient estimate, (ii) decouples the two-point observation geometry from the innovation gain to enable independent tuning, and (iii) introduces a filter-aware second-moment calibration that subtracts the attenuated DP noise contribution A(ω)·σ_w², where A(ω) is derived in closed form for the innovation filter and can be computed for general stable linear filters. Across vision and language benchmarks, FiBeR consistently improves over existing DP optimizers, surpassing state-of-the-art results under equivalent privacy constraints on multiple tasks.
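The abstract's three ingredients can be sketched as a toy optimizer step. This is a minimal, hypothetical reconstruction from the abstract alone, not the paper's algorithm: the innovation filter here is a simple first-order update with gain `gain`, and `A_omega` stands in for the closed-form attenuation factor A(ω), which the paper derives but the abstract does not give. All names and defaults below are illustrative assumptions.

```python
import numpy as np

def init_state(shape):
    """Optimizer state: filtered gradient estimate plus AdamW moments."""
    z = np.zeros(shape)
    return {"g_hat": z.copy(), "m": z.copy(), "v": z.copy(), "t": 0}

def fiber_like_step(param, g_priv, state, lr=1e-3, beta1=0.9, beta2=0.999,
                    gain=0.5, sigma_w=1.0, A_omega=0.25, eps=1e-8, wd=0.01):
    """One FiBeR-style step (hypothetical sketch based on the abstract).

    g_priv : privatized (clipped + noised) gradient for this step.
    gain   : innovation gain (stand-in for the paper's decoupled gain).
    A_omega: attenuation of DP noise variance by the filter; the paper
             computes this in closed form, here it is a free parameter.
    """
    # (i) Innovation-space denoising: filter the residual between the
    # privatized gradient and the running estimate, then integrate it.
    innovation = g_priv - state["g_hat"]
    state["g_hat"] = state["g_hat"] + gain * innovation

    g = state["g_hat"]
    # Standard AdamW moment accumulators on the filtered gradient.
    state["m"] = beta1 * state["m"] + (1 - beta1) * g
    state["v"] = beta2 * state["v"] + (1 - beta2) * g**2
    state["t"] += 1
    m_hat = state["m"] / (1 - beta1 ** state["t"])
    v_hat = state["v"] / (1 - beta2 ** state["t"])

    # (iii) Filter-aware second-moment calibration: subtract the
    # *attenuated* DP noise term A(omega) * sigma_w^2, not the full
    # sigma_w^2, and clamp at zero to keep the variance estimate valid.
    v_cal = np.maximum(v_hat - A_omega * sigma_w**2, 0.0)

    # Decoupled weight decay, as in AdamW.
    return param - lr * (m_hat / (np.sqrt(v_cal) + eps) + wd * param)
```

The key difference from a plain DP-AdamW correction is the `A_omega` factor: because the filter attenuates the DP noise before it reaches the second-moment accumulator, subtracting the full σ_w² would over-correct.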

Top tags: machine learning, model training
Detailed tags: differential privacy, adaptive optimizer, bias correction, innovation filter, gradient denoising

FIBER: A Differentially Private Optimizer with Filter-Aware Innovation Bias Correction


1️⃣ One-sentence summary

This paper proposes a differentially private optimization algorithm called FiBeR that more accurately calibrates the noise bias introduced by gradient filtering, substantially improving model performance on vision and language tasks while preserving data privacy and surpassing current state-of-the-art methods.

Source: arXiv:2605.03425