Content Fuzzing for Escaping Information Cocoons on Digital Social Media
1️⃣ One-sentence summary
This paper proposes a framework called ContentFuzz, which uses a large language model to rewrite social media posts while preserving their original meaning, so that machines judge their stance differently. This helps content with differing viewpoints escape the information cocoons formed by algorithmic recommendation and reach audiences holding other opinions.
Information cocoons on social media limit users' exposure to posts with diverse viewpoints. Modern platforms use stance detection as an important signal in recommendation and ranking pipelines, which can route posts primarily to like-minded audiences and reduce cross-cutting exposure. This restricts the reach of dissenting opinions and hinders constructive discourse. We take the creator's perspective and investigate how content can be revised to reach beyond existing affinity clusters. We present ContentFuzz, a confidence-guided fuzzing framework that rewrites posts so that their human-interpreted intent is preserved while their machine-inferred stance labels change, aiming to route posts beyond their original cocoons. Our method guides a large language model (LLM) to generate meaning-preserving rewrites using confidence feedback from stance detection models. Evaluated on four representative stance detection models across three datasets in two languages, ContentFuzz effectively changes machine-classified stance labels while maintaining semantic integrity with respect to the original content.
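The abstract describes a confidence-guided loop: an LLM proposes meaning-preserving rewrites, a stance detector scores each candidate, and confidence feedback steers the search until the machine-inferred label flips. The paper's actual algorithm is not reproduced here; the following is a minimal illustrative sketch in which `rewrite` and `stance_confidence` are toy stand-ins (keyword-based) for the LLM and the stance model, and all function names are assumptions for illustration.

```python
import random


def stance_confidence(text):
    # Toy stand-in for a stance detection model: labels the text
    # "favor"/"against"/"none" from simple keyword counts, with a
    # confidence score. A real system would call a trained classifier.
    pro = sum(text.lower().count(w) for w in ("support", "benefit"))
    con = sum(text.lower().count(w) for w in ("oppose", "harm"))
    if pro == con:
        return "none", 0.5
    label = "favor" if pro > con else "against"
    return label, (max(pro, con) + 1) / (pro + con + 2)


def rewrite(text, rng):
    # Toy stand-in for an LLM rewrite: swaps one charged word for a
    # softer near-synonym, keeping the overall gist of the post.
    synonyms = {"support": "acknowledge", "benefit": "aspect",
                "oppose": "question", "harm": "concern"}
    words = text.split()
    idx = [i for i, w in enumerate(words) if w.lower().strip(".,") in synonyms]
    if not idx:
        return text
    i = rng.choice(idx)
    words[i] = synonyms[words[i].lower().strip(".,")]
    return " ".join(words)


def content_fuzz(post, max_iters=20, seed=0):
    # Confidence-guided fuzzing loop: keep rewrites that lower the
    # detector's confidence in the original label; stop once the
    # machine-inferred stance label flips.
    rng = random.Random(seed)
    orig_label, best_conf = stance_confidence(post)
    best = post
    for _ in range(max_iters):
        cand = rewrite(best, rng)
        label, conf = stance_confidence(cand)
        if label != orig_label:
            return cand, label  # label flipped: escape achieved
        if conf < best_conf:    # confidence feedback guides the search
            best, best_conf = cand, conf
    return best, orig_label
```

With the toy detector above, a post like "I support this policy because the benefit is clear and I support it." starts as "favor" and is gradually softened until the keyword-based label flips; the real framework replaces both stand-ins with an LLM and trained stance models.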
Source: arXiv: 2604.05461