arXiv submission date: 2026-05-05
📄 Abstract - A Hierarchical Sampling Framework for bounding the Generalization Error of Federated Learning

We study expected generalization bounds for the Hierarchical Federated Learning (HFL) setup using Wasserstein distance. We introduce a generalized framework in which data is sampled hierarchically, and we model it with a multi-layered tree structure that induces dependencies among the clients' datasets. We derive generalization bounds in terms of Wasserstein distance under the Lipschitz assumption on the loss function, by applying a supersample construction that allows us to measure the sensitivity of the algorithm to the change of a single node in the sampling tree. By leveraging the FL structure, we recover and strictly imply existing state-of-the-art conditional mutual information (CMI) bounds in the case of bounded losses. We also show that our bound can be applied together with Differential Privacy assumptions, to recover generalization bounds based on algorithmic privacy. To assess the tightness of our bounds, we study the Gaussian Location Model (GLM) and show that we recover the actual asymptotic rate of the generalization error.
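As context for the abstract, the generic single-level shape of a Wasserstein-distance generalization bound under an $L$-Lipschitz loss is sketched below. This is an illustrative form only, not the paper's hierarchical theorem; the notation ($W$ for the learned hypothesis, $S = (Z_1, \dots, Z_n)$ for the training sample) is assumed here for exposition.

```latex
% Illustrative single-level Wasserstein generalization bound
% (the paper's hierarchical bound refines this shape; notation assumed):
% W  = hypothesis output by the algorithm,
% S  = (Z_1, ..., Z_n) the training sample,
% gen(W, S) = population risk minus empirical risk.
\mathbb{E}\!\left[\mathrm{gen}(W, S)\right]
  \;\le\; \frac{L}{n} \sum_{i=1}^{n}
  \mathbb{E}\!\left[ \mathbb{W}_1\!\big( P_{W \mid Z_i},\, P_{W} \big) \right]
```

Here $\mathbb{W}_1$ is the 1-Wasserstein distance and $L$ the Lipschitz constant of the loss. For bounded losses, bounds of this type can be related to mutual-information quantities via total-variation and Pinsker-type inequalities, which is the route by which Wasserstein bounds can imply CMI-style bounds as the abstract describes.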

Top-level tag: machine learning theory
Detailed tags: federated learning, generalization bound, Wasserstein distance, hierarchical sampling, differential privacy

A Hierarchical Sampling Framework for bounding the Generalization Error of Federated Learning


1️⃣ One-sentence summary

This paper proposes a new framework based on a hierarchical tree-structured sampling scheme and uses the Wasserstein distance to derive upper bounds on the generalization error of federated learning. The bounds not only improve on existing conditional mutual information (CMI) bounds but can also be combined with differential privacy assumptions, and their tightness is validated on the Gaussian Location Model.

Source: arXiv 2605.03499