arXiv submission date: 2026-02-10
📄 Abstract - Empowering Contrastive Federated Sequential Recommendation with LLMs

Federated sequential recommendation (FedSeqRec) aims to perform next-item prediction while keeping user data decentralised, yet model quality is frequently constrained by fragmented, noisy, and homogeneous interaction logs stored on individual devices. Many existing approaches attempt to compensate through manual data augmentation or additional server-side constraints, but these strategies either introduce limited semantic diversity or increase system overhead. To overcome these challenges, we propose LUMOS, a parameter-isolated FedSeqRec architecture that integrates large language models (LLMs) as local semantic generators. Instead of sharing gradients or auxiliary parameters, LUMOS privately invokes an on-device LLM to construct three complementary sequence variants from each user history: (i) future-oriented trajectories that infer plausible behavioural continuations, (ii) semantically equivalent rephrasings that retain user intent while diversifying interaction patterns, and (iii) preference-inconsistent counterfactuals that serve as informative negatives. These synthesized sequences are jointly encoded within the federated backbone through a tri-view contrastive optimisation scheme, enabling richer representation learning without exposing sensitive information. Experimental results across three public benchmarks show that LUMOS achieves consistent gains over competitive centralised and federated baselines on HR@20 and NDCG@20. In addition, the use of semantically grounded positive signals and counterfactual negatives improves robustness under noisy and adversarial environments, even without dedicated server-side protection modules. Overall, this work demonstrates the potential of LLM-driven semantic generation as a new paradigm for advancing privacy-preserving federated recommendation.
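The tri-view contrastive scheme described above can be illustrated with a minimal InfoNCE-style sketch. This is not the paper's implementation: the function names, temperature value, and toy embeddings below are illustrative assumptions. The idea is that the original sequence embedding acts as the anchor, the future-oriented and rephrased variants serve as positives, and the preference-inconsistent counterfactuals serve as negatives.

```python
import math


def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)


def tri_view_contrastive_loss(anchor, positives, negatives, tau=0.1):
    """InfoNCE-style loss over three views (illustrative sketch).

    anchor    -- embedding of the original user sequence
    positives -- embeddings of the future-oriented and rephrased variants
    negatives -- embeddings of the counterfactual variants
    tau       -- temperature (assumed value; not specified in the abstract)

    Pulls semantically consistent views toward the anchor and pushes
    counterfactual negatives away; lower loss means better alignment.
    """
    pos = [math.exp(cosine(anchor, p) / tau) for p in positives]
    neg = [math.exp(cosine(anchor, n) / tau) for n in negatives]
    return -math.log(sum(pos) / (sum(pos) + sum(neg)))
```

In a real system the embeddings would come from the shared federated backbone encoding each sequence variant, and the loss would be combined with the usual next-item prediction objective on-device.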

Top-level tags: llm model training systems
Detailed tags: federated learning sequential recommendation contrastive learning data augmentation privacy

Empowering Contrastive Federated Sequential Recommendation with LLMs


1️⃣ One-sentence summary

This paper proposes a new method named LUMOS, which uses a large language model on the user's device to safely generate diverse synthetic behaviour sequences, improving federated recommendation performance through contrastive learning while preserving user privacy.

Source: arXiv:2602.09306