arXiv submission date: 2026-04-03
📄 Abstract - Drift-Resilient Temporal Priors for Visual Tracking

Temporal information is crucial for visual tracking, but existing multi-frame trackers are vulnerable to model drift caused by naively aggregating noisy historical predictions. In this paper, we introduce DTPTrack, a lightweight and generalizable module designed to be seamlessly integrated into existing trackers to suppress drift. Our framework consists of two core components: (1) a Temporal Reliability Calibrator (TRC) mechanism that learns to assign a per-frame reliability score to historical states, filtering out noise while anchoring on the ground-truth template; and (2) a Temporal Guidance Synthesizer (TGS) module that synthesizes this calibrated history into a compact set of dynamic temporal priors to provide predictive guidance. To demonstrate its versatility, we integrate DTPTrack into three diverse tracking architectures (OSTrack, ODTrack, and LoRAT) and show consistent, significant performance gains across all baselines. Our best-performing model, built upon an extended LoRATv2 backbone, sets a new state-of-the-art on several benchmarks, achieving a 77.5% Success rate on LaSOT and an 80.3% AO on GOT-10k.
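The abstract's two components can be pictured as a simple pipeline: score each historical frame against the anchored ground-truth template (TRC), then condense the reliability-weighted history into a few temporal priors (TGS). The paper does not give implementation details here, so the sketch below is purely illustrative: the function name, the cosine-similarity scoring, the softmax normalization, and the top-k prior selection are all assumptions, not the authors' method.

```python
import numpy as np

def calibrate_and_synthesize(history, template, num_priors=2):
    """Illustrative stand-in for TRC + TGS (not the paper's actual design).

    history:  (T, D) array of feature vectors from T historical frames
    template: (D,) feature vector of the ground-truth template (the anchor)
    Returns per-frame reliability weights and a compact set of temporal priors.
    """
    # TRC (assumed form): reliability = cosine similarity to the template,
    # softmax-normalized so noisy frames receive small weights.
    hist_n = history / np.linalg.norm(history, axis=1, keepdims=True)
    tmpl_n = template / np.linalg.norm(template)
    scores = hist_n @ tmpl_n                       # per-frame reliability logits
    weights = np.exp(scores - scores.max())        # stable softmax
    weights = weights / weights.sum()

    # TGS (assumed form): down-weight each frame by its reliability, then keep
    # the num_priors most reliable calibrated states as "dynamic temporal priors".
    calibrated = weights[:, None] * history
    top = np.argsort(weights)[::-1][:num_priors]
    priors = calibrated[top]
    return weights, priors
```

Even in this toy form, the key property carries over: a frame whose features have drifted away from the template contributes little to the synthesized priors, which is the drift-suppression behavior the abstract describes.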

Top-level tags: computer vision, model training, model evaluation
Detailed tags: visual tracking, temporal modeling, model drift, benchmark, multi-frame tracking

Drift-Resilient Temporal Priors for Visual Tracking


1️⃣ One-sentence summary

This paper proposes a general-purpose module called DTPTrack that suppresses model drift in visual tracking, the problem of accumulated erroneous predictions, by scoring the reliability of historical frames and synthesizing dynamic temporal priors, yielding significant performance gains across several existing trackers.

Source: arXiv 2604.02654