arXiv submission date: 2026-03-04
📄 Abstract - When AI Fails, What Works? A Data-Driven Taxonomy of Real-World AI Risk Mitigation Strategies

Large language models (LLMs) are increasingly embedded in high-stakes workflows, where failures propagate beyond isolated model errors into systemic breakdowns that can lead to legal exposure, reputational damage, and material financial losses. Building on this shift from model-centric risks to end-to-end system vulnerabilities, we analyze real-world AI incident reporting and mitigation actions to derive an empirically grounded taxonomy that links failure dynamics to actionable interventions. Using a unified corpus of 9,705 media-reported AI incident articles, we extract explicit mitigation actions from 6,893 texts via structured prompting and then systematically classify responses to extend MIT's AI Risk Mitigation Taxonomy. Our taxonomy introduces four new mitigation categories: 1) Corrective and Restrictive Actions, 2) Legal/Regulatory and Enforcement Actions, 3) Financial, Economic, and Market Controls, and 4) Avoidance and Denial, capturing response patterns that are becoming increasingly prevalent as AI deployment and regulation evolve. Quantitatively, we label the mitigation dataset with 32 distinct labels, producing 23,994 label assignments; 9,629 of these reflect previously unseen mitigation patterns, yielding a 67% increase over the original subcategory coverage and substantially enhancing the taxonomy's applicability to emerging systemic failure modes. By structuring incident responses, the paper strengthens "diagnosis-to-prescription" guidance and advances continuous, taxonomy-aligned post-deployment monitoring to prevent cascading incidents and downstream impact.
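To make the described pipeline concrete, below is a minimal Python sketch (not the authors' code) of the two stages the abstract outlines: structured prompting to extract explicit mitigation actions from an incident article, followed by multi-label classification against the extended taxonomy. The prompt texts, the `call_llm` helper, and the function names are hypothetical placeholders; only the four new category names and the label counts at the end come from the abstract. The final lines reproduce the quoted 67% figure under the assumption that it is measured relative to the assignments already covered by the original subcategories (23,994 − 9,629).

```python
# Minimal sketch of the two-stage extraction/labeling pipeline described in the
# abstract. `call_llm`, the prompt wording, and function names are hypothetical
# placeholders, not the authors' implementation.
import json

TAXONOMY_LABELS = [
    # The four new top-level categories introduced by the paper; the remaining
    # subcategory labels (32 in total) would come from the extended MIT taxonomy.
    "Corrective and Restrictive Actions",
    "Legal/Regulatory and Enforcement Actions",
    "Financial, Economic, and Market Controls",
    "Avoidance and Denial",
]

EXTRACTION_PROMPT = """Read the incident article below and return a JSON list of
explicit mitigation actions taken in response to the AI failure. Return [] if none.

Article:
{article}
"""

CLASSIFICATION_PROMPT = """Assign every applicable label from this list to the
mitigation action. Return a JSON list of labels.

Labels: {labels}
Action: {action}
"""


def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's client."""
    raise NotImplementedError


def extract_and_label(article: str) -> list[dict]:
    """Stage 1: extract mitigation actions; Stage 2: assign taxonomy labels."""
    actions = json.loads(call_llm(EXTRACTION_PROMPT.format(article=article)))
    labeled = []
    for action in actions:
        labels = json.loads(call_llm(
            CLASSIFICATION_PROMPT.format(labels=TAXONOMY_LABELS, action=action)))
        labeled.append({"action": action, "labels": labels})
    return labeled


# Sanity check on the coverage figure quoted in the abstract: 9,629 of the
# 23,994 label assignments reflect previously unseen patterns, i.e. roughly a
# 67% increase over the assignments covered by the original subcategories.
new, total = 9_629, 23_994
print(f"{new / (total - new):.0%}")  # ~67%
```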

Top-level tags: llm systems model evaluation
Detailed tags: risk mitigation incident reporting taxonomy systemic failure post-deployment monitoring

When AI Fails, What Works? A Data-Driven Taxonomy of Real-World AI Risk Mitigation Strategies


1️⃣ One-Sentence Summary

By analyzing nearly ten thousand media reports of AI incidents, this paper builds a new, more comprehensive taxonomy of risk-mitigation strategies, helping practitioners find effective remedies more quickly when AI systems fail and preventing small errors from escalating into major problems.

From arXiv: 2603.04259