IRIS: Intersection-aware Ray-based Implicit Editable Scenes
1️⃣ One-sentence summary
This paper proposes a new method named IRIS that models 3D scenes with high quality, real-time rendering, and easy editability by precisely computing the intersections between rays and scene primitives and aggregating features directly along each ray, addressing the efficiency and interactivity bottlenecks of existing techniques.
Neural Radiance Fields achieve high-fidelity scene representation but suffer from costly training and rendering, while 3D Gaussian splatting offers real-time performance with strong empirical results. Recent solutions harness the best of both worlds by using Gaussians as proxies to guide neural field evaluations, yet they still incur significant computational inefficiencies: they typically rely on stochastic volumetric sampling to aggregate features, which severely limits rendering performance. To address this issue, a novel framework named IRIS (Intersection-aware Ray-based Implicit Editable Scenes) is introduced for efficient and interactive scene editing. To overcome the limitations of standard ray marching, an analytical sampling strategy precisely identifies the interaction points between rays and scene primitives, eliminating empty-space processing. Furthermore, to address the computational bottleneck of spatial neighbor lookups, a continuous feature aggregation mechanism operates directly along the ray: by interpolating latent attributes from the sorted intersections, costly 3D searches are bypassed while geometric consistency is preserved, enabling high-fidelity, real-time rendering and flexible shape editing. Code can be found at this https URL.
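The abstract's two key ideas, analytic ray-primitive intersection instead of stochastic ray marching, and feature interpolation along the sorted intersections instead of 3D neighbor search, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: spheres stand in for the paper's primitives, the function names are invented here, and plain linear interpolation stands in for whatever learned aggregation IRIS actually uses.

```python
import numpy as np

def ray_sphere_intersections(origin, direction, centers, radii):
    """Analytic ray-primitive intersection (spheres as stand-in primitives).

    Assumes `direction` is unit-length. Returns all entry/exit distances t
    sorted front-to-back, plus the index of the primitive each t belongs to,
    so only occupied segments of the ray are ever processed.
    """
    oc = centers - origin                           # (N, 3) origin -> center
    t_mid = oc @ direction                          # closest approach along ray
    d2 = np.einsum('ij,ij->i', oc, oc) - t_mid**2   # squared ray-center distance
    hit = d2 < radii**2                             # ray actually enters sphere
    half = np.sqrt(radii[hit]**2 - d2[hit])         # half chord length in t
    idx = np.nonzero(hit)[0]
    ts = np.concatenate([t_mid[hit] - half, t_mid[hit] + half])
    ids = np.concatenate([idx, idx])
    order = np.argsort(ts)                          # sort hits front-to-back
    return ts[order], ids[order]

def features_along_ray(t_query, ts, feats):
    """Continuous feature aggregation along the ray.

    Latent attributes `feats` stored at the sorted intersection depths `ts`
    are linearly interpolated in the 1D ray parameter t, so no 3D spatial
    neighbor lookup is needed at query time.
    """
    return np.stack(
        [np.interp(t_query, ts, feats[:, k]) for k in range(feats.shape[1])],
        axis=-1,
    )
```

For example, a ray along +z through two unit spheres centered at z=2 and z=5 yields the sorted hit depths t = 1, 3, 4, 6; a query at t = 2 then blends the latent attributes of that first primitive's entry and exit points by pure 1D interpolation.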
Source: arXiv: 2603.15368