arXiv submission date: 2026-03-03
📄 Abstract - Credibility Governance: A Social Mechanism for Collective Self-Correction under Weak Truth Signals

Online platforms increasingly rely on opinion aggregation to allocate real-world attention and resources, yet common signals such as engagement votes or capital-weighted commitments are easy to amplify and often track visibility rather than reliability. This makes collective judgments brittle under weak truth signals, noisy or delayed feedback, early popularity surges, and strategic manipulation. We propose Credibility Governance (CG), a mechanism that reallocates influence by learning which agents and viewpoints consistently track evolving public evidence. CG maintains dynamic credibility scores for both agents and opinions, updates opinion influence via credibility-weighted endorsements, and updates agent credibility based on the long-run performance of the opinions they support, rewarding early and persistent alignment with emerging evidence while filtering short-lived noise. We evaluate CG in POLIS, a socio-physical simulation environment that models coupled belief dynamics and downstream feedback under uncertainty. Across settings with initial majority misalignment, observation noise and contamination, and misinformation shocks, CG outperforms vote-based, stake-weighted, and no-governance baselines, yielding faster recovery to the true state, reduced lock-in and path dependence, and improved robustness under adversarial pressure. Our implementation and experimental scripts are publicly available at this https URL.
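The abstract describes two coupled updates: opinion influence is computed from credibility-weighted endorsements, and agent credibility is then updated from the long-run performance of the opinions each agent endorsed. The paper's actual formulas are not given here, so the following is a minimal illustrative sketch under assumed update rules; the function names, the linear weighting, and the learning rate `eta` are all assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the Credibility Governance (CG) update loop from the
# abstract. The exponential-moving-average rule and `eta` are assumptions.

def opinion_influence(endorsements, agent_cred):
    """Influence of each opinion = sum of its endorsing agents' credibility."""
    return {
        opinion: sum(agent_cred[a] for a in agents)
        for opinion, agents in endorsements.items()
    }

def update_agent_credibility(agent_cred, endorsements, opinion_performance, eta=0.1):
    """Move each agent's credibility toward the long-run performance of the
    opinions it endorsed, rewarding persistent alignment with evidence."""
    new_cred = dict(agent_cred)
    for opinion, agents in endorsements.items():
        perf = opinion_performance[opinion]  # e.g. in [0, 1], from delayed feedback
        for a in agents:
            new_cred[a] = (1 - eta) * new_cred[a] + eta * perf
    return new_cred

# Toy round: two agents endorse opinion A, one endorses B.
cred = {"alice": 0.5, "bob": 0.5, "carol": 0.5}
endorsements = {"A": ["alice", "bob"], "B": ["carol"]}
print(opinion_influence(endorsements, cred))  # A carries twice B's influence
perf = {"A": 0.9, "B": 0.2}                   # later evidence favors A
cred = update_agent_credibility(cred, endorsements, perf)
print(cred)                                   # A's endorsers gain credibility
```

Because credibility feeds back into influence on the next round, agents who repeatedly endorse opinions that later track the evidence gain weight, while short-lived popularity surges decay, which is the self-correcting loop the mechanism relies on.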

Top tags: systems agents theory
Detailed tags: collective intelligence governance mechanism credibility scoring belief dynamics misinformation resilience

Credibility Governance: A Social Mechanism for Collective Self-Correction under Weak Truth Signals


1️⃣ One-sentence summary

This paper proposes a mechanism called Credibility Governance, which reallocates influence by dynamically assessing the credibility of both participants and viewpoints. It helps online platforms reach collective judgments more accurately and robustly in the face of misinformation, noise, and manipulation, and thus converge on the truth faster.

Source: arXiv:2603.02640