📄 Abstract - RF-DETR: Neural Architecture Search for Real-Time Detection Transformers

Open-vocabulary detectors achieve impressive performance on COCO, but often fail to generalize to real-world datasets with out-of-distribution classes not typically found in their pre-training. Rather than simply fine-tuning a heavy-weight vision-language model (VLM) for new domains, we introduce RF-DETR, a light-weight specialist detection transformer that discovers accuracy-latency Pareto curves for any target dataset with weight-sharing neural architecture search (NAS). Our approach fine-tunes a pre-trained base network on a target dataset and evaluates thousands of network configurations with different accuracy-latency tradeoffs without re-training. Further, we revisit the "tunable knobs" for NAS to improve the transferability of DETRs to diverse target domains. Notably, RF-DETR significantly improves on prior state-of-the-art real-time methods on COCO and Roboflow100-VL. RF-DETR (nano) achieves 48.0 AP on COCO, beating D-FINE (nano) by 5.3 AP at similar latency, and RF-DETR (2x-large) outperforms GroundingDINO (tiny) by 1.2 AP on Roboflow100-VL while running 20x as fast. To the best of our knowledge, RF-DETR (2x-large) is the first real-time detector to surpass 60 AP on COCO. Our code is at this https URL
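The weight-sharing search described in the abstract can be pictured concretely: fine-tune one shared-weight base network, then cheaply score many sub-network configurations and keep only the accuracy-latency Pareto front. Below is a minimal Python sketch of that loop, not the authors' implementation; `Config`, `evaluate`, and `pareto_front` are hypothetical stand-ins, and the toy AP/latency model exists only to make the example runnable.

```python
# Minimal sketch of a weight-sharing NAS sweep: sample sub-network configs
# from one fine-tuned supernet, score each without re-training, and keep
# the accuracy-latency Pareto front. All names/numbers are illustrative.
import itertools
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    depth: int        # e.g., number of decoder layers kept active
    width: int        # hidden dimension
    resolution: int   # input image resolution

def evaluate(cfg: Config) -> tuple[float, float]:
    """Stand-in for running the shared-weight network with `cfg` active.
    Returns (AP, latency_ms); this toy model only mimics the tradeoff."""
    ap = 30 + 2.5 * cfg.depth + 0.01 * cfg.width + 0.02 * cfg.resolution
    ap += random.uniform(-0.5, 0.5)  # evaluation noise
    latency = 0.4 * cfg.depth + 0.003 * cfg.width + 0.004 * cfg.resolution
    return ap, latency

def pareto_front(points):
    """Keep configs not dominated by another with >= AP and <= latency."""
    front = []
    for cfg, ap, lat in points:
        dominated = any(
            o_ap >= ap and o_lat <= lat and (o_ap, o_lat) != (ap, lat)
            for _, o_ap, o_lat in points
        )
        if not dominated:
            front.append((cfg, ap, lat))
    return sorted(front, key=lambda x: x[2])

# Every config reuses the same fine-tuned weights, so evaluating thousands
# of variants is cheap compared with training each one from scratch.
space = [Config(d, w, r) for d, w, r in
         itertools.product([2, 3, 4, 6], [256, 384, 512], [384, 512, 640])]
results = [(cfg, *evaluate(cfg)) for cfg in space]
for cfg, ap, lat in pareto_front(results):
    print(f"{cfg}: AP={ap:.1f}, latency={lat:.1f} ms")
```

In the real system, `evaluate` would presumably run the detector with the chosen sub-network active on a validation set and time it on target hardware; the Pareto-optimal configurations then become the released model sizes (nano through 2x-large).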

Top tags: computer vision, model training, model evaluation
Detailed tags: object detection, neural architecture search, real-time detection transformers, accuracy-latency tradeoff

📄 Paper Summary

RF-DETR: Neural Architecture Search for Real-Time Detection Transformers


1️⃣ One-Sentence Summary

This paper introduces RF-DETR, a lightweight object detection model that uses neural architecture search to automatically find the best accuracy-latency tradeoff, significantly outperforming prior real-time detection methods across multiple datasets.
