Delving into Spectral Clustering with Vision-Language Representations
1️⃣ One-sentence summary
This paper proposes a new multi-modal spectral clustering method built on pre-trained vision-language models: by coupling the visual similarity of images with their semantic overlap, it improves clustering quality and outperforms prior methods by a large margin on multiple benchmarks.
2️⃣ Abstract
Spectral clustering is known as a powerful technique in unsupervised data analysis. The vast majority of approaches to spectral clustering are driven by a single modality, leaving the rich information in multi-modal representations untapped. Inspired by the recent success of vision-language pre-training, this paper enriches the landscape of spectral clustering from a single-modal to a multi-modal regime. In particular, we propose Neural Tangent Kernel Spectral Clustering, which leverages cross-modal alignment in pre-trained vision-language models. By anchoring the neural tangent kernel with positive nouns, i.e., those semantically close to the images of interest, we arrive at formulating the affinity between images as a coupling of their visual proximity and semantic overlap. We show that this formulation amplifies within-cluster connections while suppressing spurious ones across clusters, hence encouraging block-diagonal structures. In addition, we present a regularized affinity diffusion mechanism that adaptively ensembles affinity matrices induced by different prompts. Extensive experiments on **16** benchmarks -- including classical, large-scale, fine-grained and domain-shifted datasets -- demonstrate that our method consistently outperforms the state-of-the-art by a large margin.
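The abstract describes the method only at a high level. As a rough illustration of the core idea, coupling visual proximity with semantic overlap before running spectral clustering, here is a minimal NumPy sketch. It is not the paper's Neural Tangent Kernel formulation and omits the regularized affinity diffusion step; the function name, the softmax-over-nouns notion of semantic overlap, and the `temperature` parameter are assumptions made for illustration only.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans


def multimodal_spectral_clustering(img_feats, txt_feats, n_clusters, temperature=0.01):
    """Toy sketch: cluster images by coupling visual proximity with semantic overlap.

    img_feats: (N, D) image embeddings from a vision-language encoder (e.g. CLIP).
    txt_feats: (M, D) text embeddings of candidate noun prompts.
    """
    # L2-normalize so dot products become cosine similarities.
    img = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    txt = txt_feats / np.linalg.norm(txt_feats, axis=1, keepdims=True)

    # Visual proximity: pairwise cosine similarity between images (negatives clipped).
    visual = np.clip(img @ img.T, 0.0, None)

    # Semantic overlap: softmax each image's scores over the candidate nouns,
    # then compare the resulting noun distributions between image pairs.
    logits = (img @ txt.T) / temperature
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    semantic = probs @ probs.T

    # Couple the two cues: an edge survives only when two images are close
    # both visually and semantically, damping spurious cross-cluster links.
    affinity = visual * semantic

    # Standard spectral clustering on the symmetric normalized Laplacian.
    deg = affinity.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg + 1e-12))
    laplacian = np.eye(len(deg)) - d_inv_sqrt @ affinity @ d_inv_sqrt
    _, eigvecs = eigh(laplacian)                 # eigenvalues in ascending order
    embedding = eigvecs[:, :n_clusters]          # eigenvectors of the smallest eigenvalues
    embedding /= np.linalg.norm(embedding, axis=1, keepdims=True) + 1e-12

    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embedding)
```

In the paper's setting, different prompt templates induce different affinity matrices, and the regularized affinity diffusion mechanism is what adaptively combines them; the sketch above uses a single prompt set for simplicity.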
Source: arXiv: 2602.09586