Face Anything: 4D Face Reconstruction from Any Image Sequence
1️⃣ One-Sentence Summary
This paper proposes a unified 4D face reconstruction method: by predicting each pixel's coordinate in a canonical face space, it can simultaneously recover high-fidelity 3D shape, expression changes, and dense tracking from any image sequence, reducing correspondence error by roughly 3× and improving depth accuracy by 16% over existing methods.
Accurate reconstruction and tracking of dynamic human faces from image sequences is challenging because non-rigid deformations, expression changes, and viewpoint variations occur simultaneously, creating significant ambiguity in geometry and correspondence estimation. We present a unified method for high-fidelity 4D facial reconstruction based on canonical facial point prediction, a representation that assigns each pixel a normalized facial coordinate in a shared canonical space. This formulation transforms dense tracking and dynamic reconstruction into a canonical reconstruction problem, enabling temporally consistent geometry and reliable correspondences within a single feed-forward model. By jointly predicting depth and canonical coordinates, our method delivers accurate depth estimation, temporally stable reconstruction, dense 3D geometry, and robust facial point tracking within a single architecture. We implement this formulation as a transformer-based model trained on multi-view geometry data non-rigidly warped into the canonical space. Extensive experiments on image and video benchmarks demonstrate state-of-the-art performance across reconstruction and tracking tasks, achieving approximately 3× lower correspondence error and faster inference than prior dynamic reconstruction methods, while improving depth accuracy by 16%. These results highlight canonical facial point prediction as an effective foundation for unified feed-forward 4D facial reconstruction.
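The key idea, assigning each pixel a coordinate in a shared canonical space so that dense tracking reduces to matching in that space, can be illustrated with a toy sketch. This is a hypothetical NumPy example with made-up shapes and values, not the paper's transformer implementation: given per-pixel canonical coordinates predicted for two frames, correspondence becomes nearest-neighbor lookup in the canonical space.

```python
import numpy as np

# Hypothetical per-pixel predictions for two frames of a tiny 4x4 image.
# Each pixel is assigned a 3D coordinate in a shared canonical face space
# (values are illustrative only, not from the paper).
rng = np.random.default_rng(0)
H = W = 4

canon_a = rng.uniform(-1, 1, size=(H * W, 3))   # canonical coords, frame A
# Frame B observes the same surface points in a shuffled pixel order,
# with small prediction noise added.
perm = rng.permutation(H * W)
canon_b = canon_a[perm] + rng.normal(0, 1e-3, size=(H * W, 3))

# Dense tracking: match each pixel of frame A to the frame-B pixel whose
# predicted canonical coordinate is nearest.
d2 = ((canon_a[:, None, :] - canon_b[None, :, :]) ** 2).sum(-1)
match = d2.argmin(axis=1)        # for each A-pixel, index of nearest B-pixel

# Since canon_b[i] came from canon_a[perm[i]], a correct match inverts
# the permutation: perm[match[j]] == j for every pixel j.
recovered = perm[match]
print((recovered == np.arange(H * W)).all())
```

Because both frames map onto the same canonical space, correspondence needs no optical flow or pairwise registration; this is what lets a single feed-forward pass yield temporally consistent tracking. Combined with a per-pixel depth prediction (omitted here), each matched pixel also lifts to a 3D point, giving the dense 4D reconstruction.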
Source: arXiv:2604.19702