Decision-Focused Federated Learning Under Heterogeneous Objectives and Constraints
1️⃣ One-Sentence Summary
This paper proposes a decision-focused federated learning framework in which participants share a predictive model while retaining distinct optimization objectives and feasible-region constraints. It analyzes the performance loss induced by this heterogeneity and defines a condition under which federation improves decision quality; experiments show that even when the downstream optimization problems differ noticeably, directly combining federated averaging with the SPO+ method still performs well.
We consider what we refer to as the Decision-Focused Federated Learning (DFFL) framework, i.e., a predict-then-optimize approach employed by a collection of agents, where each agent's predictive model is an input to a downstream linear optimization problem, and no direct exchange of raw data is allowed. Importantly, clients can differ both in objective functions and in feasibility constraints. We build on the well-known SPO+ approach and develop heterogeneity bounds for the SPO+ surrogate loss in this setting. This is accomplished by employing a support function representation of the feasible region, separating (i) objective shift, via norm distances between the cost vectors, and (ii) feasible-set shift, via shape distances between the constraint sets. In the case of strongly convex feasible regions, sharper bounds are derived due to optimizer stability. Building on these results, we define a heuristic local-versus-federated excess risk decision rule which, under SPO+ risk, gives a condition for when federation can be expected to improve decision quality: the heterogeneity penalty must be smaller than the statistical advantage of pooling data. We implement a set of FedAvg-style DFFL experiments on both polyhedral and strongly convex problems and show that federation is broadly robust in the strongly convex setting, while performance in the polyhedral setting degrades primarily with constraint heterogeneity, especially for clients with many samples. In other words, especially in the strongly convex case, a direct combination of FedAvg and SPO+ can still yield promising performance even when the downstream optimization problems are noticeably different.
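To make the SPO+ surrogate concrete, here is a minimal sketch for a polyhedral feasible region, using the standard SPO+ formula ℓ(ĉ, c) = max_{w∈S} (c − 2ĉ)ᵀw + 2ĉᵀw*(c) − z*(c), where w*(c) and z*(c) are the optimal solution and value of min_{w∈S} cᵀw. The constraint data and function names below are illustrative, not taken from the paper; the linear-optimization oracle is SciPy's `linprog`.

```python
import numpy as np
from scipy.optimize import linprog

def solve_lp(c, A_ub, b_ub):
    """Linear optimization oracle: min c^T w  s.t.  A_ub w <= b_ub, 0 <= w <= 1.
    (Illustrative polytope; the box bounds keep the LP bounded.)"""
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0.0, 1.0)] * len(c), method="highs")
    return res.x, res.fun

def spo_plus_loss(c_hat, c, A_ub, b_ub):
    """SPO+ surrogate loss for a minimization problem:
        max_{w in S} (c - 2*c_hat)^T w  +  2*c_hat^T w*(c)  -  z*(c).
    The inner max is computed by minimizing the negated cost vector."""
    w_star, z_star = solve_lp(c, A_ub, b_ub)          # true-cost optimum
    _, neg_max = solve_lp(-(c - 2.0 * c_hat), A_ub, b_ub)
    return -neg_max + 2.0 * c_hat @ w_star - z_star

# Toy instance: S = {w >= 0 : w1 + w2 <= 1}, true cost c = (-1, -2).
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c_true = np.array([-1.0, -2.0])
print(spo_plus_loss(c_true, c_true, A, b))                 # 0 at a perfect prediction
print(spo_plus_loss(np.array([-2.0, -1.0]), c_true, A, b)) # positive for a bad prediction
```

The loss is zero when the predicted cost equals the true cost and nonnegative otherwise, which is what makes it a usable per-client training objective in a FedAvg-style loop.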
Source: arXiv: 2604.20031