Abstract - Vision Transformers for Zero-Shot Clustering of Animal Images: A Comparative Benchmarking Study
Manual labeling of animal images remains a significant bottleneck in ecological research, limiting the scale and efficiency of biodiversity monitoring. This study investigates whether state-of-the-art Vision Transformer (ViT) foundation models can group thousands of unlabeled animal images directly into species-level clusters. We present a comprehensive benchmarking framework evaluating five ViT models combined with five dimensionality-reduction techniques and four clustering algorithms (two supervised, two unsupervised) across 60 species (30 mammals and 30 birds), with each test using a random subset of 200 validated images per species. We examine when species-level clustering succeeds, where it fails, and whether clustering within species reveals ecologically meaningful patterns such as sex, age, or phenotypic variation. Our results demonstrate near-perfect species-level clustering (V-measure: 0.958) using DINOv3 embeddings with t-SNE and supervised hierarchical clustering. Unsupervised approaches achieve competitive performance (V-measure: 0.943) while requiring no prior species knowledge, rejecting only 1.14% of images as outliers for expert review. We further demonstrate robustness to realistic long-tailed species distributions and show that intentional over-clustering can reliably extract intra-specific variation, including age classes, sexual dimorphism, and pelage differences. We introduce an open-source benchmarking toolkit and provide recommendations to help ecologists select appropriate methods for their specific taxonomic groups and data.
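The pipeline the abstract describes (ViT embeddings → dimensionality reduction → clustering, scored with V-measure) can be sketched with scikit-learn. This is a minimal illustration only: the "embeddings" below are synthetic Gaussian blobs standing in for DINOv3 features, and the dimensions and species counts are toy values, not the paper's setup.

```python
# Hedged sketch of the benchmarked pipeline: embeddings -> t-SNE ->
# hierarchical clustering -> V-measure. Real DINOv3 features would be
# extracted from images; here we mock them as one Gaussian blob per species.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import v_measure_score

rng = np.random.default_rng(0)
n_species, per_species, dim = 5, 40, 384  # toy stand-in for 60 species x 200 images

# Mock embeddings: well-separated cluster centers plus unit noise.
centers = rng.normal(scale=5.0, size=(n_species, dim))
X = np.vstack([c + rng.normal(size=(per_species, dim)) for c in centers])
y_true = np.repeat(np.arange(n_species), per_species)

# Reduce to 2-D with t-SNE, then cluster hierarchically (supervised in the
# paper's sense: the number of species is given to the algorithm).
X2 = TSNE(n_components=2, random_state=0, perplexity=30).fit_transform(X)
labels = AgglomerativeClustering(n_clusters=n_species).fit_predict(X2)

print(f"V-measure: {v_measure_score(y_true, labels):.3f}")
```

The "unsupervised" variants in the study would replace `AgglomerativeClustering(n_clusters=...)` with a method that infers the cluster count itself and can flag outliers, at the cost of some accuracy.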
Vision Transformers for Zero-Shot Clustering of Animal Images: A Comparative Benchmarking Study
1️⃣ One-Sentence Summary
This study demonstrates that, using state-of-the-art Vision Transformer models, large collections of animal images can be efficiently and automatically clustered to the species level without any species labels, and can even be further sorted into ecologically meaningful subgroups such as sex and age classes, giving ecological research a powerful automated image-analysis tool.