arXiv submission date: 2026-03-03
📄 Abstract - On Discriminative vs. Generative classifiers: Rethinking MLLMs for Action Understanding

Multimodal Large Language Models (MLLMs) have advanced open-world action understanding and can be adapted as generative classifiers for closed-set settings by autoregressively generating action labels as text. However, this approach is inefficient, and shared subwords across action labels introduce semantic overlap, leading to ambiguity in generation. In contrast, discriminative classifiers learn task-specific representations with clear decision boundaries, enabling efficient one-step classification without autoregressive decoding. We first compare generative and discriminative classifiers with MLLMs for closed-set action understanding, revealing the superior accuracy and efficiency of the latter. To bridge the performance gap, we design strategies that elevate generative classifiers toward performance comparable with discriminative ones. Furthermore, we show that generative modeling can complement discriminative classifiers, leading to better performance while preserving efficiency. To this end, we propose the Generation-Assisted Discriminative (GAD) classifier for closed-set action understanding. GAD operates only during fine-tuning, preserving full compatibility with MLLM pretraining. Extensive experiments on temporal action understanding benchmarks demonstrate that GAD improves both accuracy and efficiency over generative methods, achieving state-of-the-art results on four tasks across five datasets, including an average 2.5% accuracy gain and 3x faster inference on our largest COIN benchmark.
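The efficiency contrast the abstract draws can be sketched with a toy example. This is purely illustrative, not the paper's implementation: the labels and logits are hypothetical, and real subword tokenization is far richer than whitespace splitting. The point is structural: a discriminative head scores every label in one step, while a generative classifier pays one decoding step per label token, and labels sharing subwords (here, "open") overlap during generation.

```python
# Illustrative toy, not the paper's method: compare the single-step cost of
# a discriminative head with token-by-token generative label decoding.

labels = ["open door", "open box", "close door"]   # closed action set
# "open door" / "open box" share the subword "open" -- the kind of overlap
# the abstract identifies as a source of ambiguity during generation.

# Hypothetical logits a task-specific classification head might output
# for one video clip (stand-in for an MLLM-derived feature + linear head).
logits = [2.1, 0.4, -1.3]

def discriminative_predict(logits):
    """One forward pass scores every label at once: a single argmax."""
    best = max(range(len(labels)), key=lambda i: logits[i])
    return labels[best], 1                      # (prediction, decoding steps)

def generative_predict(logits):
    """Generative-style classification emits the label text subword by
    subword, so inference cost grows with the label's token length."""
    best = max(range(len(labels)), key=lambda i: logits[i])
    tokens = labels[best].split()               # toy "subword" tokenization
    decoded = []
    for tok in tokens:                          # one decoding step per token
        decoded.append(tok)
    return " ".join(decoded), len(tokens)

d_pred, d_steps = discriminative_predict(logits)
g_pred, g_steps = generative_predict(logits)
print(d_pred, d_steps)   # prints "open door 1"
print(g_pred, g_steps)   # prints "open door 2"
```

Both routes name the same action, but the generative path needs as many steps as the label has tokens, which is the gap GAD closes by keeping generation in fine-tuning only and classifying discriminatively at inference.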

Top-level tags: multi-modal model evaluation machine learning
Detailed tags: multimodal llms action understanding discriminative classifiers generative classifiers efficiency

On Discriminative vs. Generative classifiers: Rethinking MLLMs for Action Understanding


1️⃣ One-sentence summary

This paper finds that, for closed-set action understanding, discriminative classifiers built on multimodal large language models are more accurate and efficient than generative classifiers, and it proposes a hybrid method that introduces generative assistance only during fine-tuning, significantly improving both accuracy and inference speed.

Source: arXiv 2603.02546