DAIT: Distillation from Vision-Language Models to Lightweight Classifiers with Adaptive Intermediate Teacher Transfer
1️⃣ One-Sentence Summary
This paper proposes a new method called DAIT, which introduces a learnable "intermediate teacher" to compress the knowledge of large vision-language models, enabling lightweight models to perform fine-grained image classification efficiently and accurately.
Large-scale Vision-Language Models (VLMs) encode rich multimodal semantics that are highly beneficial for fine-grained visual categorization (FGVC). However, their prohibitive computational cost hinders practical deployment in resource-constrained environments. Although knowledge distillation helps transfer VLM capacity to lightweight classifiers, conventional distillation mechanisms, which transfer directly from a generic VLM to a compact student, often yield suboptimal results due to severe architectural misalignment and the introduction of task-irrelevant information. To alleviate this limitation, we propose Distillation with Adaptive Intermediate Teacher transfer (DAIT), which facilitates adaptive knowledge transfer from VLMs to lightweight students. DAIT introduces a trainable intermediate teacher that learns to transfer frozen VLM representations under explicit supervision from the target fine-grained task. This intermediate teacher adaptively enhances discriminative visual cues, thereby producing compact, task-aligned knowledge that can be reliably distilled into lightweight models. Extensive evaluations on multiple FGVC benchmarks with diverse student architectures demonstrate that our method achieves performance gains of 12.63% and 8.34% on the FGVC-Aircraft and CUB-200-2011 datasets, respectively, establishing DAIT as a principled paradigm for transferring from general-purpose VLMs to deployable fine-grained recognition models.
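The abstract describes a two-stage pipeline: first train an intermediate teacher on top of frozen VLM features under target-task supervision, then distill that task-aligned teacher into a lightweight student. A minimal PyTorch sketch of that pipeline is below; the module sizes, the stand-in random backbone, the KL-divergence distillation loss, and the temperature/weighting hyperparameters are all illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
NUM_CLASSES, VLM_DIM, STUDENT_DIM = 10, 64, 32  # illustrative sizes

# Frozen "VLM" backbone: in the paper this is a large pretrained
# vision-language encoder; a fixed random projection stands in here.
vlm = nn.Linear(3 * 8 * 8, VLM_DIM)
for p in vlm.parameters():
    p.requires_grad = False

# Toy batch of flattened images with fine-grained labels.
x = torch.randn(128, 3 * 8 * 8)
y = torch.randint(0, NUM_CLASSES, (128,))

# Stage 1: train the intermediate teacher on frozen VLM features,
# supervised by the target task (the "adaptive transfer" step).
teacher = nn.Sequential(nn.Linear(VLM_DIM, VLM_DIM), nn.ReLU(),
                        nn.Linear(VLM_DIM, NUM_CLASSES))
opt_t = torch.optim.Adam(teacher.parameters(), lr=1e-2)
for _ in range(50):
    opt_t.zero_grad()
    loss_t = F.cross_entropy(teacher(vlm(x)), y)
    loss_t.backward()
    opt_t.step()

# Stage 2: distill the task-aligned teacher into a lightweight student
# (softened-logit KL plus hard-label CE; T and alpha are assumptions).
student = nn.Sequential(nn.Linear(3 * 8 * 8, STUDENT_DIM), nn.ReLU(),
                        nn.Linear(STUDENT_DIM, NUM_CLASSES))
opt_s = torch.optim.Adam(student.parameters(), lr=1e-2)
T, alpha = 4.0, 0.7
with torch.no_grad():
    t_logits = teacher(vlm(x))  # teacher targets, computed once
for _ in range(50):
    opt_s.zero_grad()
    s_logits = student(x)
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * T * T
    loss_s = alpha * kd + (1 - alpha) * F.cross_entropy(s_logits, y)
    loss_s.backward()
    opt_s.step()

print(f"teacher loss {loss_t.item():.3f}, student loss {loss_s.item():.3f}")
```

The point of the intermediate stage is that the student never sees raw VLM features: it is distilled only from logits that have already been shaped by the fine-grained task, which is the mechanism the abstract credits for avoiding architectural misalignment and task-irrelevant information.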
Source: arXiv 2603.15166