📄 Abstract - FMA-Net++: Motion- and Exposure-Aware Real-World Joint Video Super-Resolution and Deblurring

Real-world video restoration is plagued by complex degradations from motion coupled with dynamically varying exposure - a key challenge largely overlooked by prior works and a common artifact of auto-exposure or low-light capture. We present FMA-Net++, a framework for joint video super-resolution and deblurring that explicitly models this coupled effect of motion and dynamically varying exposure. FMA-Net++ adopts a sequence-level architecture built from Hierarchical Refinement with Bidirectional Propagation blocks, enabling parallel, long-range temporal modeling. Within each block, an Exposure Time-aware Modulation layer conditions features on per-frame exposure, which in turn drives an exposure-aware Flow-Guided Dynamic Filtering module to infer motion- and exposure-aware degradation kernels. FMA-Net++ decouples degradation learning from restoration: the former predicts exposure- and motion-aware priors to guide the latter, improving both accuracy and efficiency. To evaluate under realistic capture conditions, we introduce REDS-ME (multi-exposure) and REDS-RE (random-exposure) benchmarks. Trained solely on synthetic data, FMA-Net++ achieves state-of-the-art accuracy and temporal consistency on our new benchmarks and GoPro, outperforming recent methods in both restoration quality and inference speed, and generalizes well to challenging real-world videos.

Top-level tags: computer vision, video, model training
Detailed tags: video restoration, super-resolution, deblurring, motion modeling, exposure-aware

FMA-Net++: Motion- and Exposure-Aware Real-World Joint Video Super-Resolution and Deblurring


1️⃣ One-sentence summary

This paper proposes a new method called FMA-Net++, which explicitly models the coupled effect of motion and dynamically varying exposure to jointly improve the sharpness and resolution of real-world videos more effectively, achieving state-of-the-art restoration quality and inference speed on new benchmarks.

