VIOLA: Towards Video In-Context Learning with Minimal Annotations
1️⃣ One-Sentence Summary
This paper proposes an efficient framework called VIOLA that combines a small amount of expert annotation with a large pool of unlabeled video data, enabling multimodal large language models to adapt quickly and robustly to new video task domains at very low annotation cost.
Generalizing Multimodal Large Language Models (MLLMs) to novel video domains is essential for real-world deployment but remains challenging due to the scarcity of labeled data. While In-Context Learning (ICL) offers a training-free adaptation path, standard methods rely on large annotated pools, which are often impractical in specialized environments like industrial or surgical settings since they require expert annotation. To bridge this gap, we introduce VIOLA (Video In-cOntext Learning with minimal Annotation), a label-efficient framework that synergizes minimal expert supervision with abundant unlabeled data. First, to maximize the efficiency of a strict annotation budget, we propose density-uncertainty-weighted sampling. Unlike standard diversity or uncertainty strategies that risk selecting visual outliers, our method leverages density estimation to identify samples that are simultaneously diverse, representative, and informative. Second, to utilize the remaining unlabeled data without noise propagation, we construct a hybrid pool and introduce confidence-aware retrieval and confidence-aware prompting. These mechanisms explicitly model label reliability, retrieving demonstrations based on a composite score of similarity and confidence while enabling the MLLM to adaptively distinguish between verified ground truths and noisy pseudo-labels. Extensive experiments across nine diverse benchmarks using four MLLMs demonstrate that our framework significantly outperforms various baselines in low-resource settings, achieving robust adaptation with minimal annotation costs.
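The two mechanisms described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical rendering: it assumes density-uncertainty-weighted sampling scores candidates by the product of a kernel-density estimate and model uncertainty, and that confidence-aware retrieval ranks demonstrations by a weighted sum of cosine similarity and label confidence. The function names, the `alpha` mixing weight, and the bandwidth are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def density_uncertainty_scores(embeddings, uncertainties, bandwidth=1.0):
    """Score unlabeled videos for annotation under a strict budget.

    High density = representative (not a visual outlier); high uncertainty =
    informative. Product scoring is an assumption for illustration.
    """
    kde = KernelDensity(bandwidth=bandwidth).fit(embeddings)
    log_density = kde.score_samples(embeddings)          # log p(x) per sample
    density = np.exp(log_density - log_density.max())    # rescale for numerical stability
    return density * uncertainties                       # select top-scoring samples for experts

def confidence_aware_retrieve(query_emb, pool_embs, pool_conf, k=4, alpha=0.5):
    """Retrieve k demonstrations from the hybrid pool.

    Composite score of similarity to the query and label confidence:
    expert-verified items carry confidence 1.0, pseudo-labeled items carry
    the model's confidence. The linear mix via `alpha` is an assumption.
    """
    sims = pool_embs @ query_emb / (
        np.linalg.norm(pool_embs, axis=1) * np.linalg.norm(query_emb) + 1e-8)
    scores = alpha * sims + (1.0 - alpha) * pool_conf
    return np.argsort(scores)[::-1][:k]                  # indices of the k demonstrations
```

The retrieved demonstrations would then be serialized into the prompt together with their confidence tags, so the MLLM can weigh verified ground truths against noisy pseudo-labels, in line with the confidence-aware prompting the abstract describes.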
Source: arXiv 2601.15549