arXiv submission date: 2026-04-07
📄 Abstract - Reciprocal Trust and Distrust in Artificial Intelligence Systems: The Hard Problem of Regulation

Policy makers, scientists, and the public are increasingly confronted with thorny questions about the regulation of artificial intelligence (AI) systems. A key common thread concerns whether AI can be trusted and which factors can make it more trustworthy in the eyes of stakeholders and users. This is indeed crucial, as the trustworthiness of AI systems is fundamental both for democratic governance and for the development and deployment of AI. This article advances the discussion by arguing that AI systems should also be recognized, at least to some extent, as artifacts capable of exercising a form of agency, thereby enabling them to engage in relationships of trust or distrust with humans. It further examines the implications of these reciprocal trust dynamics for regulators tasked with overseeing AI systems. The article concludes by identifying key tensions and unresolved dilemmas that these dynamics pose for the future of AI regulation and governance.

Top-level tags: systems theory, general
Detailed tags: ai regulation, trustworthiness, human-ai interaction, governance, agency

Reciprocal Trust and Distrust in Artificial Intelligence Systems: The Hard Problem of Regulation


1️⃣ One-sentence summary

This paper argues that AI systems should be regarded as entities with a degree of autonomy, capable of forming reciprocal relationships of trust or distrust with humans, and it examines the core challenges and unresolved dilemmas that these dynamics pose for AI regulation.

Source: arXiv:2604.05826