MUA: Mobile Ultra-detailed Animatable Avatars
1️⃣ One-sentence summary
This paper proposes a new animatable digital-human technique: using a novel compression and knowledge-distillation approach, it takes an ultra-detailed avatar model that previously could only run on high-end servers, compresses it dramatically, and deploys it on mobile devices such as phones and VR headsets, running at real-time frame rates while preserving realistic dynamic details and appearance.
Building photorealistic, animatable full-body digital humans remains a longstanding challenge in computer graphics and vision. Recent advances in animatable avatar modeling have largely progressed along two directions: improving the fidelity of dynamic geometry and appearance, or reducing computational complexity to enable deployment on resource-constrained platforms, e.g., VR headsets. However, existing approaches fail to achieve both goals simultaneously: ultra-high-fidelity avatars typically require substantial computation on server-class GPUs, whereas lightweight avatars often suffer from limited surface dynamics, reduced appearance detail, and noticeable artifacts. To bridge this gap, we propose a novel animatable avatar representation, termed Wavelet-guided Multi-level Spatial Factorized Blendshapes, and a corresponding distillation pipeline that transfers motion-aware clothing dynamics and fine-grained appearance details from a pre-trained ultra-high-quality avatar model into a compact, efficient representation. By coupling multi-level wavelet spectral decomposition with low-rank structural factorization in texture space, our method achieves up to 2000X lower computational cost and a 10X smaller model size than the original high-quality teacher avatar model, while preserving visually plausible dynamics and appearance details that closely resemble those of the teacher model. Extensive comparisons with state-of-the-art methods show that our approach significantly outperforms existing avatar approaches designed for mobile settings and achieves comparable or superior rendering quality to most approaches that can only run on servers. Importantly, our representation substantially improves the practicality of high-fidelity avatars for immersive applications, achieving over 180 FPS on a desktop PC and real-time native on-device performance at 24 FPS on a standalone Meta Quest 3.
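The abstract's core compression idea couples a wavelet decomposition with low-rank factorization in texture space. A minimal numpy sketch of that combination, using a single-level Haar transform plus a truncated SVD (all names and the specific transform are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def haar_decompose(tex):
    """Single-level 2D Haar wavelet split: one low-frequency band (LL)
    and three high-frequency detail bands (LH, HL, HH)."""
    a = tex[0::2, 0::2]; b = tex[0::2, 1::2]
    c = tex[1::2, 0::2]; d = tex[1::2, 1::2]
    ll = (a + b + c + d) / 4.0  # coarse appearance
    lh = (a - b + c - d) / 4.0  # horizontal detail
    hl = (a + b - c - d) / 4.0  # vertical detail
    hh = (a - b - c + d) / 4.0  # diagonal detail
    return ll, lh, hl, hh

def low_rank_factorize(band, rank):
    """Truncated SVD: store a band as U @ V with far fewer parameters."""
    u, s, vt = np.linalg.svd(band, full_matrices=False)
    U = u[:, :rank] * s[:rank]  # fold singular values into U
    V = vt[:rank, :]
    return U, V

rng = np.random.default_rng(0)
tex = rng.standard_normal((256, 256))  # stand-in for a texture-space blendshape
ll, lh, hl, hh = haar_decompose(tex)
U, V = low_rank_factorize(ll, rank=16)
full = ll.size           # 128 * 128 = 16384 values in the LL band
compact = U.size + V.size  # 16 * (128 + 128) = 4096 values after factorization
print(full, compact)     # 16384 4096 -> 4x fewer parameters for this band
```

In a multi-level variant, the LL band would be decomposed recursively and each band factorized at a rank matched to its spectral content, which is one way the large compute and storage reductions cited above could be realized.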
Source: arXiv: 2604.18583