arXiv submission date: 2026-01-26
📄 Abstract - The Limits of AI Data Transparency Policy: Three Disclosure Fallacies

Data transparency has emerged as a rallying cry for addressing concerns about AI: data quality, privacy, and copyright chief among them. Yet while these calls are crucial for accountability, current transparency policies often fall short of their intended aims. Much like nutrition labels for food, proposed "nutrition facts" for AI currently suffer from limited consideration of research on effective disclosures. We offer an institutional perspective and identify three common fallacies in policy implementations of data disclosures for AI. First, many data transparency proposals exhibit a specification gap between the stated goals of data transparency and the actual disclosures necessary to achieve those goals. Second, reform attempts exhibit an enforcement gap between disclosures required on paper and the enforcement needed to ensure compliance in fact. Third, policy proposals manifest an impact gap between disclosed information and meaningful changes in developer practices and public understanding. Informed by the social science of transparency, our analysis identifies affirmative paths toward transparency that is effective rather than merely symbolic.

Top-level tags: ai data policy
Detailed tags: transparency policy disclosure fallacies ai governance data quality regulatory gaps

The Limits of AI Data Transparency Policy: Three Disclosure Fallacies


1️⃣ One-Sentence Summary

This paper argues that current transparency policies, which aim to improve AI accountability through data disclosure, suffer from three common fallacies: a disconnect between stated goals and the disclosures specified, weak enforcement, and limited real-world impact. Drawing on social science research on transparency, it proposes paths toward disclosure regimes that are genuinely effective rather than merely symbolic.

Source: arXiv: 2601.18127