Nearest-Neighbor Density Estimation for Dependency Suppression
1️⃣ One-sentence summary
This paper proposes a new unsupervised learning method that combines a variational autoencoder with nearest-neighbor density estimation to effectively remove a dataset's dependence on sensitive variables (such as gender or race) while preserving as much other useful information as possible, with potential applications in fairness, robust learning, and privacy protection.
The ability to remove unwanted dependencies from data is crucial in various domains, including fairness, robust learning, and privacy protection. In this work, we propose an encoder-based approach that learns a representation independent of a sensitive variable but otherwise preserving essential data characteristics. Unlike existing methods that rely on decorrelation or adversarial learning, our approach explicitly estimates and modifies the data distribution to neutralize statistical dependencies. To achieve this, we combine a specialized variational autoencoder with a novel loss function driven by non-parametric nearest-neighbor density estimation, enabling direct optimization of independence. We evaluate our approach on multiple datasets, demonstrating that it can outperform existing unsupervised techniques and even rival supervised methods in balancing information removal and utility.
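The abstract's loss function is driven by non-parametric nearest-neighbor density estimation. As background (not the authors' exact loss, whose details are not given here), a minimal sketch of the classical k-NN density estimator, where the density at a query point is approximated as p(x) ≈ k / (n · V_d · r_k^d), with r_k the distance to the k-th nearest neighbor and V_d the volume of the unit d-ball:

```python
import math
import numpy as np

def knn_density(query, data, k=5):
    """Classical k-NN density estimate: p(x) ~ k / (n * V_d * r_k^d).

    query: (m, d) array of points to evaluate.
    data:  (n, d) array of samples from the unknown density.
    """
    n, d = data.shape
    # Pairwise Euclidean distances from each query point to every sample.
    dists = np.linalg.norm(query[:, None, :] - data[None, :, :], axis=-1)
    # Distance to the k-th nearest neighbor of each query point.
    r_k = np.sort(dists, axis=1)[:, k - 1]
    # Volume of the unit ball in d dimensions.
    v_d = math.pi ** (d / 2) / math.gamma(d / 2 + 1)
    return k / (n * v_d * r_k ** d)
```

Estimators of this family can be plugged into differentiable independence objectives (e.g., estimating mutual information from nearest-neighbor distances), which is presumably the role such an estimator plays in the paper's loss.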
Source: arXiv: 2603.04224