arXiv submission date: 2026-02-10
📄 Abstract - Towards Explainable Federated Learning: Understanding the Impact of Differential Privacy

Data privacy and eXplainable Artificial Intelligence (XAI) are two important aspects of modern Machine Learning (ML) systems. To enhance data privacy, recent ML models have been designed as Federated Learning (FL) systems, on top of which additional privacy layers can be added via Differential Privacy (DP). To improve explainability, on the other hand, ML must consider more interpretable approaches, with fewer features and less complex internal architectures. In this context, this paper aims to achieve an ML model that combines enhanced data privacy with explainability. We propose an FL solution, called Federated EXplainable Trees with Differential Privacy (FEXT-DP), that: (i) is based on Decision Trees, since they are lightweight and more explainable than neural-network-based FL systems; and (ii) provides an additional layer of data privacy protection by applying DP to the tree-based model. However, adding DP has a side effect: it harms the explainability of the system. Therefore, this paper also presents the impact of DP protection on the explainability of the ML model. The performance assessment carried out shows improvements of FEXT-DP in terms of faster training (i.e., fewer rounds), Mean Squared Error, and explainability.
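The abstract does not detail how FEXT-DP applies DP to the trees. A common approach for tree-based models is to perturb leaf values with the Laplace mechanism; below is a minimal sketch under that assumption (the function names and the per-leaf noise scheme are illustrative, not taken from the paper):

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = rng.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def privatize_leaves(leaf_values, sensitivity, epsilon, seed=0):
    """Add Laplace noise with scale = sensitivity / epsilon to each
    leaf value of a regression tree (epsilon-DP Laplace mechanism)."""
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    return [v + laplace_noise(scale, rng) for v in leaf_values]

# Smaller epsilon => more noise => stronger privacy, but the noisy
# leaf values are harder to interpret, which is the explainability
# side effect the paper studies.
noisy = privatize_leaves([1.0, 2.0, 3.0], sensitivity=1.0, epsilon=0.5)
```

With a very large epsilon the noise scale approaches zero and the leaf values are nearly unchanged, which makes the privacy-explainability tradeoff easy to probe empirically.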

Top-level tags: machine learning systems, model evaluation
Detailed tags: federated learning, differential privacy, explainable AI, decision trees, privacy-utility tradeoff

Towards Explainable Federated Learning: Understanding the Impact of Differential Privacy


1️⃣ One-sentence summary

This paper proposes a federated learning method that combines differential privacy with decision trees, enhancing data privacy while also examining the negative impact of the privacy protection on model explainability.

Source: arXiv 2602.10100