PEARL: Geometry Aligns Semantics for Training-Free Open-Vocabulary Semantic Segmentation
1️⃣ One-Sentence Summary
This paper proposes PEARL, a training-free method that, in two simple steps — geometric alignment and text-guided graph propagation — efficiently segments objects in an image and labels them with arbitrary text-described categories, requiring no extra training data or complex models while achieving state-of-the-art performance.
Training-free open-vocabulary semantic segmentation (OVSS) promises rapid adaptation to new label sets without retraining. Yet many methods rely on heavy post-processing or handle text and vision in isolation, leaving cross-modal geometry underutilized; others introduce auxiliary vision backbones or multi-model pipelines, which add complexity and latency while compromising design simplicity. We present PEARL (Procrustes alignment with text-aware Laplacian propagation), a compact two-step inference scheme that follows an align-then-propagate principle. The Procrustes alignment step performs an orthogonal projection inside the last self-attention block, rotating keys toward the query subspace via a stable polar iteration. The text-aware Laplacian propagation step then refines per-pixel logits on a small grid through a confidence-weighted, text-guided graph solve: text provides both a data-trust signal and neighbor gating, while image gradients preserve boundaries. The method is fully training-free, plug-and-play, and uses only fixed constants, adding minimal latency (a small per-head projection and a few conjugate-gradient steps). PEARL sets a new state of the art in training-free OVSS across standard benchmarks without extra data or auxiliary backbones, under both with-background and without-background protocols.
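The two steps in the abstract correspond to standard numerical building blocks: the orthogonal Procrustes problem, whose solution is the polar factor of a cross-covariance matrix (computable by a stable Newton–Schulz polar iteration), and a screened graph-Laplacian system solved with a few conjugate-gradient steps. The sketch below illustrates both under simplifying assumptions; all function names, the constants (`lam`, `steps`, `iters`), and the way edge weights `wh`/`wv` encode text gating and image gradients are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def polar_orthogonal(M, iters=12):
    # Newton–Schulz iteration for the orthogonal polar factor of M.
    # Frobenius normalization keeps singular values inside the
    # convergence region of the map X <- X (3I - X^T X) / 2.
    X = M / np.linalg.norm(M)
    I = np.eye(M.shape[1])
    for _ in range(iters):
        X = 0.5 * X @ (3.0 * I - X.T @ X)
    return X

def procrustes_align(K, Q, iters=12):
    # Orthogonal Procrustes: the rotation R minimizing ||K R - Q||_F
    # is the polar factor of K^T Q; apply it to rotate keys toward
    # the query subspace.
    R = polar_orthogonal(K.T @ Q, iters)
    return K @ R

def lap_apply(z, wh, wv):
    # Weighted 4-neighbor graph Laplacian on an H x W grid.
    # wh: horizontal edge weights (H, W-1); wv: vertical (H-1, W).
    out = np.zeros_like(z)
    dh = z[:, :-1] - z[:, 1:]
    out[:, :-1] += wh * dh
    out[:, 1:] -= wh * dh
    dv = z[:-1, :] - z[1:, :]
    out[:-1, :] += wv * dv
    out[1:, :] -= wv * dv
    return out

def laplacian_refine(y, conf, wh, wv, lam=1.0, steps=10):
    # Solve (diag(conf) + lam * L) z = diag(conf) * y by conjugate
    # gradient: conf is a per-pixel data-trust term, and L smooths
    # along edges kept open by wh/wv (low image gradient, compatible
    # text neighbors). Run independently for each class's logit map.
    A = lambda z: conf * z + lam * lap_apply(z, wh, wv)
    z = y.copy()
    r = conf * y - A(z)
    p = r.copy()
    rs = float((r * r).sum())
    for _ in range(steps):
        Ap = A(p)
        alpha = rs / (float((p * Ap).sum()) + 1e-12)
        z = z + alpha * p
        r = r - alpha * Ap
        rs_new = float((r * r).sum())
        p = r + (rs_new / (rs + 1e-12)) * p
        rs = rs_new
    return z
```

Because the system matrix is symmetric positive definite (a diagonal confidence term plus a scaled Laplacian), a handful of conjugate-gradient steps on a small grid suffices, which is consistent with the minimal-latency claim above.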
Source: arXiv:2603.21528