arXiv submission date: 2026-04-20
📄 Abstract - Discriminative-Generative Synergy for Occlusion Robust 3D Human Mesh Recovery

3D human mesh recovery from monocular RGB images aims to estimate anatomically plausible 3D human models for downstream applications, but remains challenging under partial or severe occlusions. Regression-based methods are efficient yet often produce implausible or inaccurate results in unconstrained scenarios, while diffusion-based methods provide strong generative priors for occluded regions but may weaken fidelity to rare poses due to over-reliance on generation. To address these limitations, we propose a brain-inspired synergistic framework that integrates the discriminative power of vision transformers with the generative capability of conditional diffusion models. Specifically, the ViT-based pathway extracts deterministic visual cues from visible regions, while the diffusion-based pathway synthesizes structurally coherent human body representations. To effectively bridge the two pathways, we design a diverse-consistent feature learning module to align discriminative features with generative priors, and a cross-attention multi-level fusion mechanism to enable bidirectional interaction across semantic levels. Experiments on standard benchmarks demonstrate that our method achieves superior performance on key metrics and shows strong robustness in complex real-world scenarios.
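The cross-attention fusion described above can be illustrated with a minimal sketch. This is not the paper's implementation: the token counts, dimensions, and the residual-addition fusion are illustrative assumptions, and only a single semantic level of the multi-level mechanism is shown. Each pathway's tokens attend to the other pathway's tokens via scaled dot-product cross-attention, giving the bidirectional interaction the abstract mentions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, key_value):
    # query tokens attend to key_value tokens (scaled dot-product)
    d_k = query.shape[-1]
    scores = query @ key_value.T / np.sqrt(d_k)
    return softmax(scores, axis=-1) @ key_value

def bidirectional_fusion(f_disc, f_gen):
    # discriminative (ViT) tokens attend to generative (diffusion) tokens
    # and vice versa; residual addition is an illustrative fusion choice
    disc_attended = cross_attention(f_disc, f_gen)
    gen_attended = cross_attention(f_gen, f_disc)
    return f_disc + disc_attended, f_gen + gen_attended

rng = np.random.default_rng(0)
f_disc = rng.standard_normal((16, 64))  # hypothetical: 16 ViT tokens, 64-d
f_gen = rng.standard_normal((32, 64))   # hypothetical: 32 diffusion tokens, 64-d
fd, fg = bidirectional_fusion(f_disc, f_gen)
```

In the full method this interaction would be repeated across semantic levels with learned projections for queries, keys, and values; the sketch omits those to keep the attention flow itself visible.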

Top-level tags: computer vision, machine learning
Detailed tags: 3d human mesh recovery, occlusion robust, diffusion model, vision transformer, fusion

Discriminative-Generative Synergy for Occlusion Robust 3D Human Mesh Recovery


1️⃣ One-sentence summary

This paper proposes a brain-inspired hybrid framework that combines the discriminative power of vision Transformers with the generative capability of diffusion models; through feature alignment and cross-level fusion, it accurately recovers realistic 3D human meshes even under severe occlusion.

Source: arXiv: 2604.21712