arXiv submission date: 2026-03-16
📄 Abstract - Anatomy of a Lie: A Multi-Stage Diagnostic Framework for Tracing Hallucinations in Vision-Language Models

Vision-Language Models (VLMs) frequently "hallucinate" - generate plausible yet factually incorrect statements - posing a critical barrier to their trustworthy deployment. In this work, we propose a new paradigm for diagnosing hallucinations, recasting them from static output errors into dynamic pathologies of a model's computational cognition. Our framework is grounded in a normative principle of computational rationality, allowing us to model a VLM's generation as a dynamic cognitive trajectory. We design a suite of information-theoretic probes that project this trajectory onto an interpretable, low-dimensional Cognitive State Space. Our central discovery is a governing principle we term the geometric-information duality: a cognitive trajectory's geometric abnormality within this space is fundamentally equivalent to its high information-theoretic surprisal. Hallucination detection thus reduces to a geometric anomaly detection problem. Evaluated across diverse settings - from rigorous binary QA (POPE) and comprehensive reasoning (MME) to unconstrained open-ended captioning (MS-COCO) - our framework achieves state-of-the-art performance. Crucially, it operates with high efficiency under weak supervision and remains highly robust even when calibration data is heavily contaminated. This approach enables a causal attribution of failures, mapping observable errors to distinct pathological states: perceptual instability (measured by Perceptual Entropy), logical-causal failure (measured by Inferential Conflict), and decisional ambiguity (measured by Decision Entropy). Ultimately, this opens a path toward building AI systems whose reasoning is transparent, auditable, and diagnosable by design.
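The abstract names entropy-based probes (e.g. Decision Entropy for decisional ambiguity) but does not give their implementation. As a minimal illustrative sketch, assuming such a probe is computed as the Shannon entropy of the model's next-token distribution at a decoding step (the paper's actual probes may differ), one could write:

```python
import numpy as np

def shannon_entropy(probs: np.ndarray) -> float:
    """Shannon entropy (in nats) of a probability distribution."""
    p = probs[probs > 0]  # ignore zero-probability entries
    return float(-np.sum(p * np.log(p)))

# Toy next-token distributions at two decoding steps (hypothetical values).
confident = np.array([0.90, 0.05, 0.03, 0.02])   # model is decisive
ambiguous = np.array([0.30, 0.28, 0.22, 0.20])   # model is torn between tokens

h_conf = shannon_entropy(confident)   # low entropy
h_amb = shannon_entropy(ambiguous)    # high entropy -> decisional ambiguity
```

In this reading, a step with high entropy would be flagged as a candidate "decisional ambiguity" state; the full framework additionally tracks the trajectory of such quantities across generation rather than a single step.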

Top-level tags: multi-modal model evaluation, natural language processing
Detailed tags: hallucination detection, vision-language models, cognitive trajectory, information-theoretic probes, diagnostic framework

Anatomy of a Lie: A Multi-Stage Diagnostic Framework for Tracing Hallucinations in Vision-Language Models


1️⃣ One-sentence summary

This paper proposes a new approach that treats a vision-language model's production of hallucinations (plausible-looking but factually incorrect descriptions) as a dynamic pathological trajectory of its internal "computational cognition", and detects and attributes these errors through an interpretable "Cognitive State Space", making the model's reasoning process more transparent and diagnosable.

Source: arXiv 2603.15557