Abstract - Does it Really Count? Assessing Semantic Grounding in Text-Guided Class-Agnostic Counting
Open-world text-guided class-agnostic counting (CAC) has emerged as a flexible paradigm for counting arbitrary object classes by using natural language prompts. However, current evaluation protocols primarily focus on standard counting errors within single-category images, overlooking a fundamental requirement: the ability to correctly ground the textual prompt in the visual scene. In this paper, we show that several state-of-the-art CAC models often struggle to determine which object class should be counted based on the given prompt, revealing a misalignment between textual semantics and visual object representations. This limitation leads to spurious counting responses and reduced reliability in real-world scenarios. To systematically address these limitations, we propose a new evaluation framework focused on model robustness and trustworthiness. Our contribution is two-fold: (i) we introduce PrACo++ (Prompt-Aware Counting++), a novel test suite featuring two dedicated evaluation protocols -- the negative-label test and the distractor test -- paired with new specialized metrics; and (ii) we present the MUCCA (MUlti-Category Class-Agnostic counting) evaluation dataset, a new collection of real-world images featuring multiple annotated object categories per scene, unlike existing CAC benchmarks that typically include a single category per image. Our extensive experimental evaluation of 10 state-of-the-art methods shows that, despite strong performance under standard counting metrics, current models exhibit significant weaknesses in understanding and grounding object class descriptions. Finally, we provide a quantitative analysis of how semantic similarity between prompts influences these failures. Overall, our results underscore the need for more semantically grounded architectures and offer a reliable framework for the future assessment of open-world text-guided CAC methods.
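To make the negative-label protocol concrete, here is a minimal sketch of the idea: a counting model is queried with a prompt naming a class that is absent from the image, and a semantically grounded model should predict a count near zero. The `model.count(image, prompt)` interface, the `threshold` parameter, and the metric itself are illustrative assumptions, not the paper's actual API or formulas.

```python
def negative_label_score(model, samples, threshold=0.5):
    """Fraction of absent-class prompts for which the predicted count
    stays below `threshold` (higher is better).

    `samples` is a list of (image, absent_class) pairs, where
    `absent_class` names an object category NOT present in the image.
    The `model.count(image, prompt)` interface is a hypothetical
    stand-in for any text-guided CAC model.
    """
    if not samples:
        return 0.0
    passed = 0
    for image, absent_class in samples:
        pred = model.count(image, prompt=absent_class)
        if pred < threshold:
            passed += 1
    return passed / len(samples)
```

A model that hallucinates objects for absent-class prompts (the failure mode the abstract describes) would score low on such a metric even if its standard MAE/RMSE on single-category benchmarks is strong.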
Does it Really Count? Assessing Semantic Grounding in Text-Guided Class-Agnostic Counting
1️⃣ One-Sentence Summary
This paper exposes a serious weakness of current text-guided class-agnostic counting models in relating natural-language prompts to the visual scene, and proposes a new evaluation framework (comprising the PrACo++ test suite and the MUCCA dataset), demonstrating that even models with excellent scores on standard counting metrics often fail to determine "what should be counted," which undermines their reliability in real-world applications.