Answer Bubbles: Information Exposure in AI-Mediated Search
1️⃣ One-sentence summary
This paper finds that search systems built around AI-generated summaries (e.g., GPT, Google AI Overviews), compared with traditional link-based search, construct structurally different "information realities" for users through biased citation sources and a linguistic style that strips out expressions of uncertainty. This can produce "answer bubbles" that reduce source diversity and shape users' trust in information.
Generative search systems are increasingly replacing link-based retrieval with AI-generated summaries, yet little is known about how these systems differ in sources, language, and fidelity to cited material. We examine responses to 11,000 real search queries across four systems -- vanilla GPT, Search GPT, Google AI Overviews, and traditional Google Search -- at three levels: source diversity, linguistic characterization of the generated summary, and source-summary fidelity. We find that generative search systems exhibit significant *source-selection* biases in their citations, favoring certain sources over others. Incorporating search also selectively attenuates epistemic markers, reducing hedging by up to 60% while preserving confidence language in the AI-generated summaries. At the same time, AI summaries further compound the citation biases: Wikipedia and longer sources are disproportionately overrepresented, whereas cited social media content and negatively framed sources are substantially underrepresented. Our findings highlight the potential for *answer bubbles*, in which identical queries yield structurally different information realities across systems, with implications for user trust, source visibility, and the transparency of AI-mediated information access.
Source: arXiv: 2603.16138