SEP-YOLO: Fourier-Domain Feature Representation for Transparent Object Instance Segmentation
1️⃣ One-sentence summary
This paper proposes a new method called SEP-YOLO, which combines frequency-domain enhancement with spatial refinement to address the difficulty of precisely segmenting transparent objects caused by their blurred boundaries and low contrast, achieving state-of-the-art performance on public datasets.
Transparent object instance segmentation presents significant challenges in computer vision due to the inherent properties of transparent objects: boundary blur, low contrast, and high dependence on background context. Existing methods often fail because they rely on strong appearance cues and clear boundaries. To address these limitations, we propose SEP-YOLO, a novel framework that integrates a dual-domain collaborative mechanism for transparent object instance segmentation. Our method incorporates a Frequency Domain Detail Enhancement Module, which separates and enhances weak high-frequency boundary components via learnable complex weights. We further design a multi-scale spatial refinement stream, consisting of a Content-Aware Alignment Neck and a Multi-scale Gated Refinement Block, to ensure precise feature alignment and boundary localization in deep semantic features. We also provide high-quality instance-level annotations for the Trans10K dataset, filling a critical data gap in transparent object instance segmentation. Extensive experiments on the Trans10K and GVD datasets show that SEP-YOLO achieves state-of-the-art (SOTA) performance.
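The core idea behind the Frequency Domain Detail Enhancement Module — amplifying weak high-frequency boundary components in the Fourier domain — can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the paper uses learnable complex weights, whereas here a fixed real-valued radial mask (with hypothetical `boost` and `cutoff` parameters) stands in for them.

```python
import numpy as np

def frequency_detail_enhance(feat, boost=2.0, cutoff=0.25):
    """Sketch of frequency-domain detail enhancement for a 2-D feature
    map: transform to the Fourier domain, amplify high-frequency
    components (which carry boundary detail), and transform back.

    Note: the paper's module uses learnable complex weights; the fixed
    real-valued high-pass mask below is a simplified stand-in.
    """
    h, w = feat.shape
    F = np.fft.fftshift(np.fft.fft2(feat))  # DC component at the center
    # Radial mask: 1.0 inside the low-frequency disc, `boost` outside,
    # so only high frequencies (fine boundaries) are amplified.
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    weights = np.where(r > cutoff * min(h, w), boost, 1.0)
    F_enh = F * weights
    # Inverse transform; imaginary residue is numerical noise.
    return np.fft.ifft2(np.fft.ifftshift(F_enh)).real
```

With `boost=1.0` the round trip reproduces the input; with `boost > 1.0` the edge content of the map is strengthened, which is the effect the module exploits for low-contrast transparent boundaries.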
Source: arXiv:2603.02648