arXiv submission date: 2026-03-08
📄 Abstract - Scaling Test-Time Robustness of Vision-Language Models via Self-Critical Inference Framework

The emergence of Large Language Models (LLMs) has driven rapid progress in multi-modal learning, particularly in the development of Large Vision-Language Models (LVLMs). However, existing LVLM training paradigms place excessive reliance on the LLM component, giving rise to two critical robustness challenges: language bias and language sensitivity. To address both issues simultaneously, we propose a novel Self-Critical Inference (SCI) framework that extends Visual Contrastive Decoding by conducting multi-round counterfactual reasoning through both textual and visual perturbations. This process further introduces a new strategy for improving robustness by scaling the number of counterfactual rounds. Moreover, we also observe that failure cases of LVLMs differ significantly across models, indicating that fixed robustness benchmarks may not be able to capture the true reliability of LVLMs. To this end, we propose the Dynamic Robustness Benchmark (DRBench), a model-specific evaluation framework targeting both language bias and sensitivity issues. Extensive experiments show that SCI consistently outperforms baseline methods on DRBench, and that increasing the number of inference rounds further boosts robustness beyond existing single-step counterfactual reasoning methods.
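The abstract describes SCI as an extension of Visual Contrastive Decoding that aggregates multiple rounds of counterfactual (perturbed-input) passes. The paper's exact formulation is not given here, so the following is only a minimal sketch of the general multi-round contrastive-decoding idea: logits from the clean input are amplified relative to the average logits obtained under perturbations, and "scaling rounds" corresponds to averaging over more perturbed passes. The function name, the `alpha` weight, and the averaging scheme are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def contrastive_logits(clean_logits, perturbed_logits_list, alpha=1.0):
    """Sketch of multi-round contrastive decoding (hypothetical form).

    clean_logits: next-token logits from the unmodified image/text input.
    perturbed_logits_list: logits from each counterfactual round, e.g. with
        the image blurred or the question rephrased. More rounds = a more
        stable estimate of what the model says *without* the true evidence.
    alpha: contrast strength (assumed hyperparameter).
    """
    # Average the model's behavior across all counterfactual rounds.
    perturbed_mean = np.mean(perturbed_logits_list, axis=0)
    # Boost tokens supported by the clean input, suppress tokens the model
    # would emit anyway under perturbation (i.e., language-prior bias).
    return (1.0 + alpha) * clean_logits - alpha * perturbed_mean

# Toy example: token 0 needs the real image; token 1 is a language-prior guess
# that survives perturbation. Contrastive adjustment widens the gap in favor
# of the visually grounded token.
adjusted = contrastive_logits(
    np.array([1.0, 0.0]),                 # clean pass prefers token 0 slightly
    [np.array([1.0, 1.0]),                # round 1: both tokens look likely
     np.array([0.8, 1.2])],               # round 2: prior token even stronger
    alpha=1.0,
)
```

In this toy run the clean-input margin of 1.0 grows after subtracting the perturbed average, illustrating why adding rounds can sharpen the contrast signal; the real SCI framework presumably combines textual and visual perturbations in a more structured way.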

Top-level tags: multi-modal, model evaluation, computer vision
Detailed tags: vision-language models, test-time robustness, counterfactual reasoning, benchmarking, language bias

Scaling Test-Time Robustness of Vision-Language Models via Self-Critical Inference Framework


1️⃣ One-sentence summary

This paper proposes a new framework called Self-Critical Inference, which uses multiple rounds of counterfactual questioning to reduce large vision-language models' over-reliance on, and sensitivity to, textual input, and introduces a dynamic evaluation benchmark to more accurately assess the real-world reliability of different models.

Source: arXiv:2603.07659