Aligning Large Language Models with Searcher Preferences
1️⃣ One-Sentence Summary
This paper introduces SearchLLM, the first large language model for open-ended generative search. It uses a hierarchical, multi-dimensional reward system to ensure answer accuracy, safety, and alignment with user needs, and in real-world deployment it significantly improved search quality and user engagement.
The paradigm shift from item-centric ranking to answer-centric synthesis is redefining the role of search engines. While recent industrial progress has applied generative techniques to closed-set item ranking in e-commerce, research and deployment of open-ended generative search on large content platforms remain limited. This setting introduces challenges, including robustness to noisy retrieval, non-negotiable safety guarantees, and alignment with diverse user needs. In this work, we introduce SearchLLM, the first large language model (LLM) for open-ended generative search. We design a hierarchical, multi-dimensional reward system that separates bottom-line constraints, including factual grounding, basic answer quality and format compliance, from behavior optimization objectives that promote robustness to noisy retrieval and alignment with user needs. Concretely, our reward model evaluates responses conditioned on the user query, session history, and retrieved evidence set, combining rule-based checks with human-calibrated LLM judges to produce an interpretable score vector over these dimensions. We introduce a Gated Aggregation Strategy to derive the training reward for optimizing SearchLLM with Group Relative Policy Optimization (GRPO). We deploy SearchLLM in the AI search entry of RedNote. Offline evaluations and online A/B tests show improved generation quality and user engagement, increasing Valid Consumption Rate by 1.03% and reducing Re-search Rate by 2.81%, while upholding strict safety and reliability standards.
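The abstract describes a two-tier reward: bottom-line constraints (factual grounding, basic answer quality, format compliance) act as gates, while behavior objectives (robustness to noisy retrieval, alignment with user needs) are optimized only when the gates pass, and the resulting rewards feed GRPO's group-relative advantage. The paper's exact gating rule and dimension names are not given here, so the sketch below is an illustrative assumption: hard constraints zero out the reward, behavior scores are averaged otherwise, and advantages are normalized within a sampled group.

```python
import statistics

# Assumed dimension names; the paper's reward model may use different ones.
BOTTOM_LINE = ("factual_grounding", "answer_quality", "format_compliance")
BEHAVIOR = ("noise_robustness", "user_need_alignment")

def gated_reward(scores: dict) -> float:
    """One possible Gated Aggregation Strategy: if any bottom-line
    constraint fails, gate the reward to 0; otherwise average the
    behavior-optimization scores."""
    if not all(scores[k] for k in BOTTOM_LINE):
        return 0.0  # hard constraints dominate the training signal
    return sum(scores[k] for k in BEHAVIOR) / len(BEHAVIOR)

def grpo_advantages(rewards: list[float], eps: float = 1e-8) -> list[float]:
    """Standard GRPO group-relative advantage: normalize each sampled
    response's reward by the group's mean and standard deviation."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]
```

For example, a response that fails format compliance receives reward 0 regardless of how well it matches the user's need, which is one way to encode "non-negotiable" constraints; a group of such gated rewards is then centered and scaled before policy optimization.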
Source: arXiv: 2603.10473