Examining and Addressing Barriers to Diversity in LLM-Generated Ideas
1️⃣ One-sentence summary
This paper finds that ideas generated by large language models are more homogeneous than those produced by groups of humans. Drawing on cognitive psychology, it proposes two targeted prompting strategies, Chain-of-Thought prompting and ordinary personas, that effectively increase idea diversity, even beyond human levels.
Ideas generated by independent samples of humans tend to be more diverse than ideas generated from independent LLM samples, raising concerns that widespread reliance on LLMs could homogenize ideation and undermine innovation at a societal level. Drawing on cognitive psychology, we identify (both theoretically and empirically) two mechanisms undermining LLM idea diversity. First, at the individual level, LLMs exhibit fixation just as humans do, where early outputs constrain subsequent ideation. Second, at the collective level, LLMs aggregate knowledge into a unified distribution rather than exhibiting the knowledge partitioning inherent to human populations, where each person occupies a distinct region of the knowledge space. Through four studies, we demonstrate that targeted prompting interventions can address each mechanism independently: Chain-of-Thought (CoT) prompting reduces fixation by encouraging structured reasoning (only in LLMs, not humans), while ordinary personas (versus "creative entrepreneurs" such as Steve Jobs) improve knowledge partitioning by serving as diverse sampling cues, anchoring generation in distinct regions of the semantic space. Combining both approaches produces the highest idea diversity, outperforming humans. These findings offer a theoretically grounded framework for understanding LLM idea diversity and practical strategies for human-AI collaborations that leverage AI's efficiency without compromising the diversity essential to a healthy innovation ecosystem.
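As a rough illustration of how the two interventions described above might be combined in practice, here is a minimal sketch. The persona list, prompt wording, and `build_prompt` helper are hypothetical illustrations, not taken from the paper:

```python
import random

# Hypothetical "ordinary" personas used as diverse sampling cues
# (the paper contrasts these with "creative entrepreneur" personas
# such as Steve Jobs).
ORDINARY_PERSONAS = [
    "a retired schoolteacher",
    "a night-shift nurse",
    "a small-town librarian",
]

def build_prompt(task: str, persona: str) -> str:
    """Combine an ordinary-persona cue with Chain-of-Thought prompting."""
    return (
        f"You are {persona}.\n"
        f"Task: {task}\n"
        "Think step by step about the problem before proposing an idea, "
        "then state one concrete idea."
    )

# Sample a distinct persona for each independent generation, mimicking
# the knowledge partitioning of a human population.
random.seed(0)
prompts = [
    build_prompt("Suggest a novel use for a brick.", persona)
    for persona in random.sample(ORDINARY_PERSONAS, k=3)
]
for p in prompts:
    print(p.splitlines()[0])
```

Each prompt would then be sent to the model as an independent query, so that the persona anchors each generation in a different region of the semantic space while the step-by-step instruction works against fixation.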
Source: arXiv: 2602.20408