Inst4DGS: Instance-Decomposed 4D Gaussian Splatting with Multi-Video Label Permutation Learning
1️⃣ One-Sentence Summary
This paper proposes Inst4DGS, a method that uses a learnable label-alignment technique to dynamically decompose and stably track distinct object instances across multi-view videos, yielding significant gains in both 3D scene reconstruction and object segmentation quality.
We present Inst4DGS, an instance-decomposed 4D Gaussian Splatting (4DGS) approach with long-horizon per-Gaussian trajectories. While dynamic 4DGS has advanced rapidly, instance-decomposed 4DGS remains underexplored, largely due to the difficulty of associating inconsistent instance labels across independently segmented multi-view videos. We address this challenge by introducing per-video label-permutation latents that learn cross-video instance matches through a differentiable Sinkhorn layer, enabling direct multi-view supervision with consistent identity preservation. This explicit label alignment yields sharp decision boundaries and temporally stable instance identities, free of identity drift. To further improve efficiency, we propose instance-decomposed motion scaffolds that provide low-dimensional motion bases per object for long-horizon trajectory optimization. Experiments on Panoptic Studio and Neural3DV show that Inst4DGS jointly supports tracking and instance decomposition while achieving state-of-the-art rendering and segmentation quality. On the Panoptic Studio dataset, Inst4DGS improves PSNR from 26.10 to 28.36, and instance mIoU from 0.6310 to 0.9129, over the strongest baseline.
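The differentiable Sinkhorn layer mentioned above turns a matrix of cross-video label-matching scores into a soft (doubly stochastic) permutation by alternately normalizing rows and columns, so gradients can flow through the matching. A minimal NumPy sketch of this normalization (the function name, iteration count, and log-space formulation are illustrative assumptions, not details from the paper):

```python
import numpy as np

def sinkhorn(log_scores, n_iters=50):
    """Approximate a soft permutation from a square matrix of matching logits.

    Alternates row and column normalization in log space; as n_iters grows,
    exp(log_scores) converges toward a doubly stochastic matrix, i.e. a
    relaxed permutation that can align instance labels between two videos.
    """
    for _ in range(n_iters):
        # Row normalization: each row's entries sum to 1 (in log space).
        log_scores = log_scores - np.logaddexp.reduce(log_scores, axis=1, keepdims=True)
        # Column normalization: each column's entries sum to 1.
        log_scores = log_scores - np.logaddexp.reduce(log_scores, axis=0, keepdims=True)
    return np.exp(log_scores)
```

In a setup like the paper's, the logits would come from learnable per-video latents, and the resulting soft permutation reweights one video's instance labels before computing the multi-view segmentation loss.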
Source: arXiv:2603.18402