Towards Minimal Focal Stack in Shape from Focus
1️⃣ One-Sentence Summary
This paper proposes a novel focal stack augmentation that enables Shape-from-Focus to reconstruct accurate 3D depth from only two images captured at different focus settings, drastically reducing the large number of input images required by traditional methods while maintaining state-of-the-art precision.
Shape from Focus (SFF) is a depth reconstruction technique that estimates scene structure from focus variations observed across a focal stack, that is, a sequence of images captured at different focus settings. A key limitation of SFF methods is their reliance on densely sampled, large focal stacks, which limits their practical applicability. In this study, we propose a focal stack augmentation that enables SFF methods to estimate depth using a reduced stack of just two images, without sacrificing precision. We introduce a simple yet effective physics-based focal stack augmentation that enriches the stack with two auxiliary cues: an all-in-focus (AiF) image estimated from two input images, and Energy-of-Difference (EOD) maps, computed as the energy of differences between the AiF and input images. Furthermore, we propose a deep network that computes a deep focus volume from the augmented focal stacks and iteratively refines depth using convolutional Gated Recurrent Units (ConvGRUs) at multiple scales. Extensive experiments on both synthetic and real-world datasets demonstrate that the proposed augmentation benefits existing state-of-the-art SFF models, enabling them to achieve comparable accuracy. The results also show that our approach maintains state-of-the-art performance with a minimal stack size.
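The abstract describes the EOD maps as "the energy of differences between the AiF and input images." The sketch below illustrates one plausible reading of that operation, assuming "energy" means a locally aggregated squared difference; the window size, the box-filter aggregation, and the function names (`local_energy`, `augment_focal_stack`) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def local_energy(diff, win=9):
    """Locally aggregated squared difference: sum of diff**2 over a
    win x win neighborhood, computed in O(1) per pixel via an integral image.
    (Assumed interpretation of the paper's 'energy of differences'.)"""
    sq = diff.astype(np.float64) ** 2
    pad = win // 2
    padded = np.pad(sq, pad, mode="edge")          # replicate borders
    # Integral image with a leading zero row/column for clean box sums.
    ii = np.pad(padded, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    h, w = sq.shape
    # Box sum over each win x win window, one window per output pixel.
    return (ii[win:win + h, win:win + w] - ii[:h, win:win + w]
            - ii[win:win + h, :w] + ii[:h, :w])

def augment_focal_stack(img_a, img_b, aif, win=9):
    """Build the augmented stack from two input focal slices and an
    estimated all-in-focus (AiF) image: the two inputs, the AiF image,
    and one EOD map per input slice."""
    eod_a = local_energy(aif - img_a, win)
    eod_b = local_energy(aif - img_b, win)
    return np.stack([img_a, img_b, aif, eod_a, eod_b])
```

In this reading, each EOD map is large where an input slice deviates strongly from the AiF estimate (i.e., where that slice is defocused), giving the downstream network a per-slice defocus cue even with only two input images.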
Source: arXiv: 2604.01603