arXiv submission date: 2026-02-09
📄 Abstract - D$^2$-VR: Degradation-Robust and Distilled Video Restoration with Synergistic Optimization Strategy

The integration of diffusion priors with temporal alignment has emerged as a transformative paradigm for video restoration, delivering impressive perceptual quality. Yet the practical deployment of such frameworks is severely constrained by prohibitive inference latency and temporal instability when confronted with complex real-world degradations. To address these limitations, we propose \textbf{D$^2$-VR}, a single-image diffusion-based video restoration framework with low-step inference. To obtain precise temporal guidance under severe degradation, we first design a Degradation-Robust Flow Alignment (DRFA) module that leverages confidence-aware attention to filter unreliable motion cues. We then incorporate an adversarial distillation paradigm to compress the diffusion sampling trajectory into a rapid few-step regime. Finally, a synergistic optimization strategy is devised to harmonize perceptual quality with rigorous temporal consistency. Extensive experiments demonstrate that D$^2$-VR achieves state-of-the-art performance while accelerating the sampling process by \textbf{12$\times$}.
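The abstract's core technical idea is to filter unreliable motion cues before using optical flow for temporal guidance. The paper's DRFA module uses confidence-aware attention; as a rough illustration of the underlying principle, here is a minimal NumPy sketch of one common way to estimate flow confidence (forward-backward consistency) and gate warped features by it. All function names, thresholds, and the nearest-neighbour warping are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def forward_backward_confidence(flow_fwd, flow_bwd, alpha=0.01, beta=0.5):
    """Per-pixel flow confidence via forward-backward consistency (a standard
    heuristic, NOT the paper's DRFA module).

    flow_fwd, flow_bwd: (H, W, 2) optical flow fields between two frames.
    Returns an (H, W) confidence map in (0, 1]; low values mark unreliable motion.
    """
    H, W, _ = flow_fwd.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Where each pixel lands under the forward flow (nearest-neighbour lookup).
    xf = np.clip(np.round(xs + flow_fwd[..., 0]).astype(int), 0, W - 1)
    yf = np.clip(np.round(ys + flow_fwd[..., 1]).astype(int), 0, H - 1)
    # Backward flow sampled at the forward-warped location.
    bwd_at_fwd = flow_bwd[yf, xf]
    # For reliable pixels, forward flow + backward flow at the target is ~0.
    err = np.linalg.norm(flow_fwd + bwd_at_fwd, axis=-1)
    mag = np.linalg.norm(flow_fwd, axis=-1) + np.linalg.norm(bwd_at_fwd, axis=-1)
    # Soft gate: confidence decays once the error exceeds a magnitude-dependent slack.
    return np.exp(-np.maximum(err - (alpha * mag + beta), 0.0))

def confidence_gated_warp(feat_prev, flow_fwd, conf):
    """Warp previous-frame features along the flow and scale them by confidence,
    so unreliable motion cues contribute less to the temporal guidance signal."""
    H, W, _ = feat_prev.shape
    ys, xs = np.mgrid[0:H, 0:W]
    xw = np.clip(np.round(xs + flow_fwd[..., 0]).astype(int), 0, W - 1)
    yw = np.clip(np.round(ys + flow_fwd[..., 1]).astype(int), 0, H - 1)
    warped = feat_prev[yw, xw]
    return conf[..., None] * warped
```

In a real restoration network the gating would typically be learned (e.g. as attention weights) rather than this fixed exponential, and warping would use bilinear sampling; the sketch only conveys the "filter unreliable motion before fusing" idea.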

Top-level tags: video model training, computer vision
Detailed tags: video restoration, diffusion models, temporal alignment, knowledge distillation, adversarial training

D$^2$-VR: Degradation-Robust and Distilled Video Restoration with Synergistic Optimization Strategy


1️⃣ One-sentence summary

This paper proposes a new method named D$^2$-VR, which combines a degradation-robust motion-alignment module with adversarial distillation to speed up video restoration by 12$\times$ while preserving high restoration quality, addressing the slow inference and unstable results of existing methods when facing complex real-world degradations.

Source: arXiv 2602.08395