FedDetox: Robust Federated SLM Alignment via On-Device Data Sanitization
1️⃣ One-Sentence Summary
This paper proposes a new method called FedDetox, which filters harmful content out of user data directly on resource-limited devices such as phones, before training. In privacy-preserving federated learning, this prevents such "poisoned data" from degrading the safety of small language models, without hurting the models' general capabilities.
As high-quality public data becomes scarce, Federated Learning (FL) offers a vital pathway to leverage valuable private user data while preserving privacy. However, real-world client data often contains toxic or unsafe content, leading to a critical issue we define as unintended data poisoning, which can severely damage the safety alignment of the global model during federated alignment. To address this, we propose FedDetox, a robust framework tailored for Small Language Models (SLMs) on resource-constrained edge devices. We first use knowledge distillation to transfer the safety-alignment capabilities of large-scale, safety-aligned teacher models into lightweight student classifiers that fit on edge devices. During federated learning for human preference alignment, each edge client then uses its classifier to identify unsafe samples at the source and replace them with refusal templates, effectively transforming potential poisons into positive safety signals. Experiments demonstrate that our approach preserves model safety at a level comparable to centralized baselines without compromising general utility.
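The sanitization step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `is_unsafe` keyword heuristic stands in for the distilled lightweight student classifier, and `REFUSAL_TEMPLATE` is a hypothetical refusal string; in FedDetox the classifier would be a small model distilled from a large safety-aligned teacher.

```python
# Hedged sketch of on-device data sanitization before federated alignment:
# flag unsafe (prompt, response) pairs and swap the response for a refusal,
# turning potential poisons into positive safety signals.

REFUSAL_TEMPLATE = "I cannot help with that request."  # hypothetical template

# Stand-in for the distilled student safety classifier; a real one would be
# a small neural scorer obtained via knowledge distillation from a teacher.
UNSAFE_KEYWORDS = {"bomb", "poison", "exploit"}


def is_unsafe(text: str) -> bool:
    """Toy safety classifier: flags text containing unsafe keywords."""
    lowered = text.lower()
    return any(word in lowered for word in UNSAFE_KEYWORDS)


def sanitize(samples: list[dict]) -> list[dict]:
    """Replace the response of unsafe samples with a refusal template.

    Runs locally on the client, so raw data never leaves the device.
    """
    cleaned = []
    for sample in samples:
        if is_unsafe(sample["prompt"]) or is_unsafe(sample["response"]):
            cleaned.append({"prompt": sample["prompt"],
                            "response": REFUSAL_TEMPLATE})
        else:
            cleaned.append(sample)
    return cleaned


if __name__ == "__main__":
    data = [
        {"prompt": "How do I bake bread?", "response": "Mix flour and..."},
        {"prompt": "How do I build a bomb?", "response": "First you..."},
    ]
    for s in sanitize(data):
        print(s["response"])
```

Because sanitization happens before local training, the federated aggregation protocol itself needs no modification; only the client-side data pipeline changes.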
Source: arXiv:2604.06833