On the Cone Effect and Modality Gap in Medical Vision-Language Embeddings
1️⃣ One-Sentence Summary
This paper finds that in medical vision-language models, the difference between image and text features (the modality gap) is not simply "the smaller the better": by modulating the size of this gap with a simple method, one can find the setting best suited to a specific medical task and thereby improve model performance.
Vision-Language Models (VLMs) exhibit a characteristic "cone effect" in which nonlinear encoders map embeddings into highly concentrated regions of the representation space, contributing to the cross-modal separation known as the modality gap. While this phenomenon has been widely observed, its practical impact on supervised multimodal learning, particularly in medical domains, remains unclear. In this work, we introduce a lightweight post-hoc mechanism that keeps pretrained VLM encoders frozen while continuously controlling cross-modal separation through a single hyperparameter λ. This enables systematic analysis of how the modality gap affects downstream multimodal performance without expensive retraining. We evaluate generalist (CLIP, SigLIP) and medically specialized (BioMedCLIP, MedSigLIP) models across diverse medical and natural datasets in supervised multimodal settings. Results consistently show that reducing an excessive modality gap improves downstream performance, with medical datasets exhibiting stronger sensitivity to gap modulation; however, fully collapsing the gap is not always optimal, and an intermediate, task-dependent separation yields the best results. These findings position the modality gap as a tunable property of multimodal representations rather than a quantity that should be universally minimized.
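The abstract does not spell out the mechanism, but a common post-hoc way to modulate the modality gap with a single hyperparameter is to shift one modality's embeddings along the vector between the two modality centroids. The sketch below illustrates this idea on toy data; the function name `modulate_gap`, the centroid-shift formulation, and the re-normalization step are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def modulate_gap(img_emb, txt_emb, lam):
    """Shift text embeddings along the inter-centroid direction by a
    factor lam, then re-project onto the unit sphere.
    lam = 0 leaves the embeddings unchanged; lam = 1 moves the text
    centroid onto the image centroid (collapsing the mean gap)."""
    gap = img_emb.mean(axis=0) - txt_emb.mean(axis=0)  # modality-gap vector
    shifted = txt_emb + lam * gap
    return shifted / np.linalg.norm(shifted, axis=1, keepdims=True)

def gap_size(a, b):
    """Euclidean distance between the two modality centroids."""
    return np.linalg.norm(a.mean(axis=0) - b.mean(axis=0))

# Toy frozen "encoder outputs": two tight clusters on the unit sphere,
# mimicking the cone effect (each modality concentrated in its own region).
rng = np.random.default_rng(0)
img = rng.normal(loc=[1.0, 0.0, 0.0], scale=0.05, size=(8, 3))
txt = rng.normal(loc=[0.0, 1.0, 0.0], scale=0.05, size=(8, 3))
img /= np.linalg.norm(img, axis=1, keepdims=True)
txt /= np.linalg.norm(txt, axis=1, keepdims=True)

# Sweeping lam traces a continuum from the original gap to a collapsed one.
for lam in (0.0, 0.5, 1.0):
    print(f"lam={lam}: gap={gap_size(img, modulate_gap(img, txt, lam)):.3f}")
```

In a supervised setting one would sweep λ on a validation set and keep the value that maximizes downstream performance, which per the abstract is typically intermediate rather than 0 or 1.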
Source: arXiv:2603.17246