arXiv submission date: 2026-02-07
📄 Abstract - High Fidelity Textual User Representation over Heterogeneous Sources via Reinforcement Learning

Effective personalization on large-scale job platforms requires modeling members based on heterogeneous textual sources, including profiles, professional data, and search activity logs. As recommender systems increasingly adopt Large Language Models (LLMs), creating unified, interpretable, and concise representations from heterogeneous sources becomes critical, especially for latency-sensitive online environments. In this work, we propose a novel Reinforcement Learning (RL) framework to synthesize a unified textual representation for each member. Our approach leverages implicit user engagement signals (e.g., clicks, applies) as the primary reward to distill salient information. Additionally, the framework is complemented by rule-based rewards that enforce formatting and length constraints. Extensive offline experiments across multiple LinkedIn products, one of the world's largest job platforms, demonstrate significant improvements in key downstream business metrics. This work provides a practical, labeling-free, and scalable solution for constructing interpretable user representations that are directly compatible with LLM-based systems.
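The abstract describes a combined reward: implicit engagement signals (clicks, applies) as the primary term, complemented by rule-based rewards enforcing formatting and length constraints. A minimal sketch of how such a composite reward could be assembled is below; all function names, weights, and rule choices are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the composite reward described in the abstract:
# an engagement-based score (e.g., from a downstream click/apply model)
# plus rule-based rewards for formatting and length. Weights and rules
# here are assumptions for illustration only.

def rule_based_reward(summary: str, max_tokens: int = 256) -> float:
    """Reward summaries that satisfy length and formatting constraints."""
    tokens = summary.split()
    length_ok = 1.0 if len(tokens) <= max_tokens else 0.0
    # Example formatting rule: non-empty, single-paragraph output.
    format_ok = 1.0 if summary.strip() and "\n\n" not in summary else 0.0
    return 0.5 * length_ok + 0.5 * format_ok

def combined_reward(engagement_score: float, summary: str,
                    alpha: float = 1.0, beta: float = 0.2) -> float:
    """Engagement is the primary reward; rule-based terms complement it."""
    return alpha * engagement_score + beta * rule_based_reward(summary)

print(combined_reward(0.8, "Senior ML engineer seeking NLP roles."))
```

In an RL fine-tuning loop, a scalar like this would be computed per generated representation and fed to the policy-gradient update; the `beta` weight controls how strongly formatting rules can override the engagement signal.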

Top tags: llm reinforcement learning natural language processing
Detailed tags: user representation personalization recommender systems text distillation latency-sensitive

High Fidelity Textual User Representation over Heterogeneous Sources via Reinforcement Learning


1️⃣ One-sentence summary

This paper proposes a reinforcement learning framework that automatically distills and synthesizes a unified, concise, and interpretable textual user representation from heterogeneous textual sources such as user profiles, professional data, and search logs. The method requires no human labeling and effectively improves key downstream business metrics of recommender systems on a large job platform.

Source: arXiv:2602.07333