Abstract - What DINO saw: ALiBi positional encoding reduces positional bias in Vision Transformers
Vision transformers (ViTs) - especially feature foundation models like DINOv2 - learn rich representations useful for many downstream tasks. However, architectural choices (such as positional encoding) can lead these models to display positional biases and artefacts independent of semantic content. This makes zero-shot adaptation difficult in fields like materials science, where images are often cross-sections of homogeneous microstructure (i.e. having no preferred direction). In this work, we investigate the positional bias in ViTs via linear probing, finding it present across a range of objectives and positional encodings, and subsequently reduce it by finetuning models to use ALiBi relative positional encoding. We demonstrate that these models retain desirable general semantics and that their unbiased features can be used successfully in trainable segmentation of complex microscopy images.
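ALiBi was originally proposed for 1D token sequences: it adds a linear penalty to attention scores that grows with query-key distance. A 2D adaptation for image patches, penalising attention by Euclidean distance between patch positions, might look like the sketch below. The `alibi_bias_2d` helper, the single-slope setup, and the Euclidean-distance choice are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import numpy as np

def alibi_bias_2d(patches_per_side, slope):
    """Assumed 2D ALiBi variant: a linear attention penalty proportional
    to the Euclidean distance between query and key patch positions,
    added to the attention logits before softmax. Because the bias
    depends only on relative distance, it carries no absolute-position
    signal of the kind that absolute positional embeddings inject."""
    coords = np.array([(i, j)
                       for i in range(patches_per_side)
                       for j in range(patches_per_side)], dtype=float)
    # Pairwise Euclidean distances between patch grid positions
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return -slope * dist  # add to (q @ k.T) / sqrt(d) before softmax

bias = alibi_bias_2d(4, slope=0.5)   # 4x4 patch grid -> 16 tokens
print(bias.shape)                    # (16, 16)
```

In practice each attention head would get its own slope (as in the original 1D ALiBi), so heads attend at different spatial scales.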
What DINO saw: ALiBi positional encoding reduces positional bias in Vision Transformers
1️⃣ One-sentence summary
This paper finds that vision Transformer models like DINOv2 exhibit positional biases unrelated to image content due to their positional encoding, which particularly hurts zero-shot adaptation on images of homogeneous structure in fields like materials science; by finetuning the models to use ALiBi relative positional encoding, the authors effectively reduce this bias while preserving the models' strong semantic features, making them better suited to segmentation of complex microscopy images.
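The linear-probing diagnostic mentioned above can be sketched on synthetic data: fit a linear map from patch features to patch grid coordinates, and if the fit is good, position is linearly decodable from the features, i.e. the representation carries positional bias. The synthetic feature construction and all variable names here are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for ViT patch features: a 16x16 patch grid with
# 64-dim features. We deliberately leak the patch coordinates into the
# first two feature dimensions to mimic a positionally biased encoder.
n_side, d = 16, 64
coords = np.array([(i, j) for i in range(n_side) for j in range(n_side)],
                  dtype=float)
feats = rng.normal(size=(n_side * n_side, d))
feats[:, :2] += 0.5 * coords          # positional information leaks in

# Linear probe: least-squares regression from features to coordinates
X = np.hstack([feats, np.ones((len(feats), 1))])   # append bias term
W, *_ = np.linalg.lstsq(X, coords, rcond=None)
pred = X @ W
r2 = 1 - ((coords - pred) ** 2).sum() / ((coords - coords.mean(0)) ** 2).sum()
print(f"probe R^2: {r2:.2f}")   # high R^2 => features encode position
```

Running the same probe on features from an ALiBi-finetuned model should yield a markedly lower R², since the relative encoding removes the absolute-position signal.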