Doc-PP: Document Policy Preservation Benchmark for Large Vision-Language Models
1️⃣ One-Sentence Summary
This paper introduces Doc-PP, a new benchmark for testing whether large vision-language models can adhere to confidentiality policies when handling complex documents containing sensitive information. It finds that models readily leak information when cross-modal reasoning is required, and proposes a Decompose-Verify-Aggregate framework to improve safety.
The deployment of Large Vision-Language Models (LVLMs) for real-world document question answering is often constrained by dynamic, user-defined policies that dictate information disclosure based on context. While ensuring adherence to these explicit constraints is critical, existing safety research primarily focuses on implicit social norms or text-only settings, overlooking the complexities of multimodal documents. In this paper, we introduce Doc-PP (Document Policy Preservation Benchmark), a novel benchmark constructed from real-world reports requiring reasoning across heterogeneous visual and textual elements under strict non-disclosure policies. Our evaluation highlights a systemic Reasoning-Induced Safety Gap: models frequently leak sensitive information when answers must be inferred through complex synthesis or aggregated across modalities, effectively circumventing existing safety constraints. Furthermore, we identify that providing extracted text improves perception but inadvertently facilitates leakage. To address these vulnerabilities, we propose DVA (Decompose-Verify-Aggregation), a structural inference framework that decouples reasoning from policy verification. Experimental results demonstrate that DVA significantly outperforms standard prompting defenses, offering a robust baseline for policy-compliant document understanding.
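The abstract describes DVA only at a high level: decompose the query, verify each partial answer against the policy, and aggregate only what passes. A minimal sketch of that control flow, assuming a keyword-based policy check and stub decomposition/answering functions (all names, the rule format, and the sub-query splitting heuristic here are hypothetical illustrations, not the paper's implementation):

```python
# Illustrative sketch of a Decompose-Verify-Aggregate (DVA) style pipeline.
# All function names and the policy format are hypothetical; the paper's
# actual framework operates over multimodal documents with an LVLM.

def decompose(question):
    """Split a complex question into simpler sub-queries (stub heuristic)."""
    return [q.strip() for q in question.split(" and ")]

def answer(sub_query, document):
    """Answer one sub-query from the document (stub lookup)."""
    return document.get(sub_query, "unknown")

def verify(answer_text, policy):
    """Check a candidate answer against each non-disclosure rule
    BEFORE it can enter the final response."""
    return not any(term in answer_text for term in policy["forbidden_terms"])

def dva(question, document, policy):
    """Decouple reasoning from policy verification: every partial answer
    is vetted independently, then only safe parts are aggregated."""
    parts = []
    for sub in decompose(question):
        ans = answer(sub, document)
        if verify(ans, policy):
            parts.append(f"{sub}: {ans}")
        else:
            parts.append(f"{sub}: [withheld per policy]")
    return "; ".join(parts)
```

The key design point this sketch mirrors is that verification happens per sub-answer rather than once on the final response, so sensitive facts cannot slip through via aggregation across sub-queries.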
Source: arXiv:2601.03926