arXiv submission date: 2026-02-09
📄 Abstract - Low-Light Video Enhancement with An Effective Spatial-Temporal Decomposition Paradigm

Low-Light Video Enhancement (LLVE) seeks to restore dynamic or static scenes degraded by severely limited visibility and noise. In this paper, we present an innovative video decomposition strategy that incorporates view-independent and view-dependent components to improve LLVE performance; we call the resulting framework View-aware Low-light Video Enhancement (VLLVE). We leverage dynamic cross-frame correspondences for the view-independent term (which primarily captures intrinsic appearance) and impose a scene-level continuity constraint on the view-dependent term (which mainly describes the shading condition) to achieve consistent, high-quality decomposition results. To further enforce consistent decomposition, we introduce a dual-structure enhancement network featuring a cross-frame interaction mechanism. By supervising different frames simultaneously, this network encourages them to exhibit matching decomposition features. The mechanism integrates seamlessly with encoder-decoder single-frame networks at minimal additional parameter cost. Building upon VLLVE, we propose a more comprehensive decomposition strategy by introducing an additive residual term, resulting in VLLVE++. This residual term models scene-adaptive degradations that are difficult to capture with a decomposition formulation alone for common scenes, further strengthening the framework's ability to represent overall video content. In addition, VLLVE++ enables bidirectional, end-to-end learning for both enhancement and degradation-aware correspondence refinement, effectively increasing reliable correspondences while filtering out incorrect ones. Notably, VLLVE++ demonstrates strong capability on challenging cases, such as real-world scenes and highly dynamic videos. Extensive experiments are conducted on widely recognized LLVE benchmarks.
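
To make the decomposition concrete: the abstract describes each frame as a combination of a view-independent appearance term, a view-dependent shading term, and (in VLLVE++) an additive residual for scene-adaptive degradations. Below is a minimal PyTorch sketch of that formulation under illustrative assumptions of mine: the layer sizes, the head names (`to_R`, `to_S`, `to_E`), the `R * S + E` composition, and the unwarped cross-frame consistency loss are placeholders, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecompositionNet(nn.Module):
    """Toy decomposition head for one low-light frame.

    Predicts a view-independent term R (intrinsic appearance), a
    view-dependent term S (shading), and an additive residual E
    (scene-adaptive degradations), so a frame is modeled as
    R * S + E. All sizes and names here are assumptions, not the
    paper's network.
    """
    def __init__(self, ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.to_R = nn.Conv2d(ch, 3, 3, padding=1)  # appearance head
        self.to_S = nn.Conv2d(ch, 1, 3, padding=1)  # shading head
        self.to_E = nn.Conv2d(ch, 3, 3, padding=1)  # residual head

    def forward(self, frame):
        feat = self.encoder(frame)
        R = torch.sigmoid(self.to_R(feat))   # view-independent appearance
        S = torch.sigmoid(self.to_S(feat))   # view-dependent shading map
        E = torch.tanh(self.to_E(feat))      # additive residual (VLLVE++)
        return R, S, E

def cross_frame_consistency(R_t, R_tp1):
    """Encourage matching view-independent terms across frames.

    The paper uses cross-frame correspondences; a faithful version
    would warp R_tp1 onto frame t before comparing. The direct L1
    here is a simplification for brevity.
    """
    return F.l1_loss(R_t, R_tp1)

# Toy usage with two consecutive 64x64 RGB frames in [0, 1].
net = DecompositionNet()
frames = torch.rand(2, 3, 64, 64)
R0, S0, E0 = net(frames[:1])
R1, S1, E1 = net(frames[1:])
recon = R0 * S0 + E0                  # VLLVE++-style composition
loss = cross_frame_consistency(R0, R1)
print(recon.shape, loss.item())
```

In the paper, the consistency supervision would operate on correspondence-aligned frames and be trained end to end together with the enhancement; the direct L1 above merely stands in for that step.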

Top tags: computer vision, video, model training
Detailed tags: low-light enhancement, video decomposition, spatial-temporal modeling, neural networks, benchmark evaluation

Low-Light Video Enhancement with An Effective Spatial-Temporal Decomposition Paradigm


1️⃣ One-Sentence Summary

This paper proposes a new method, VLLVE++, that decomposes video content into distinct components and processes each separately, effectively improving the quality of dark, noisy videos, with particularly strong results on real-world dynamic scenes.

Source: arXiv:2602.08699