Rashomon Sets and Model Multiplicity in Federated Learning
1️⃣ One-Sentence Summary
This paper is the first to introduce the concept of the 'Rashomon set' (the collection of models with near-identical performance but different decision boundaries) into federated learning. It proposes three new definitions that help different clients, without sharing data, select models better aligned with their own needs and fairness requirements, rather than blindly adopting a single 'best' model.
The Rashomon set captures the collection of models that achieve near-identical empirical performance yet may differ substantially in their decision boundaries. Understanding the differences among these models, i.e., their multiplicity, is recognized as a crucial step toward model transparency, fairness, and robustness, as it reveals decision-boundary instabilities that standard metrics obscure. However, existing definitions of the Rashomon set and multiplicity metrics assume centralized learning and do not extend naturally to decentralized, multi-party settings such as Federated Learning (FL). In FL, multiple clients collaboratively train models under a central server's coordination without sharing raw data, which preserves privacy but introduces challenges from heterogeneous client data distributions and communication constraints. In this setting, the choice of a single best model may homogenize predictive behavior across diverse clients, amplify biases, or undermine fairness guarantees. In this work, we provide the first formalization of Rashomon sets in FL. Specifically, we adapt the Rashomon set definition to FL, distinguishing among three perspectives: (I) a global Rashomon set defined over aggregated statistics across all clients, (II) a t-agreement Rashomon set representing the intersection of local Rashomon sets across a fraction t of clients, and (III) individual Rashomon sets specific to each client's local data. Additionally, we show how standard multiplicity metrics can be estimated under FL's privacy constraints. Finally, we introduce a multiplicity-aware FL pipeline and conduct an empirical study on standard FL benchmark datasets. Our results demonstrate that all three proposed federated Rashomon set definitions offer valuable insights, enabling clients to deploy models that better align with their local data, fairness considerations, and practical requirements.
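To make the three federated definitions concrete, here is a minimal sketch of how they could be computed from per-client losses over a shared pool of candidate models. This is an illustrative reconstruction, not the paper's implementation: the function names, the simple epsilon-threshold Rashomon criterion, and the uniform client weighting are all assumptions.

```python
# Illustrative sketch (not from the paper): a Rashomon set as the models
# whose loss is within eps of the best, evaluated three ways for FL.
import numpy as np

def rashomon_mask(losses, eps):
    """Boolean mask: models within eps of the minimum loss."""
    losses = np.asarray(losses, dtype=float)
    return losses <= losses.min() + eps

def global_rashomon(client_losses, client_weights, eps):
    """(I) Global: Rashomon set over aggregated (weighted-average) statistics."""
    agg = np.average(client_losses, axis=0, weights=client_weights)
    return rashomon_mask(agg, eps)

def t_agreement_rashomon(client_losses, eps, t):
    """(II) t-agreement: models in the local Rashomon set of at least a fraction t of clients."""
    local = np.stack([rashomon_mask(l, eps) for l in client_losses])
    return local.mean(axis=0) >= t

def individual_rashomon(client_losses, eps):
    """(III) Individual: one local Rashomon set per client."""
    return [rashomon_mask(l, eps) for l in client_losses]

# Toy example: 3 clients x 4 candidate models (rows = clients).
L = np.array([[0.20, 0.21, 0.35, 0.22],
              [0.30, 0.25, 0.26, 0.40],
              [0.18, 0.19, 0.30, 0.20]])
print(global_rashomon(L, client_weights=[1, 1, 1], eps=0.05))
print(t_agreement_rashomon(L, eps=0.05, t=0.5))
```

Note that only per-client loss statistics are exchanged here, not raw data, which is consistent with the abstract's claim that multiplicity can be estimated under FL's privacy constraints.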
Source: arXiv: 2602.09520