arXiv submission date: 2026-04-20
📄 Abstract - Dual-stream Spatio-Temporal GCN-Transformer Network for 3D Human Pose Estimation

3D human pose estimation is a classic and important research direction in computer vision. In recent years, Transformer-based methods have made significant progress in lifting 2D human poses to 3D. However, these methods primarily focus on modeling global temporal and spatial relationships, neglecting local skeletal relationships and the information interaction between different channels. We therefore propose a novel method, the Dual-stream Spatio-temporal GCN-Transformer Network (MixTGFormer). This method models the spatial and temporal relationships of human skeletons simultaneously through two parallel streams, achieving effective fusion of global and local features. The core of MixTGFormer is composed of stacked Mixformers. Specifically, a Mixformer consists of a Mixformer Block and a Squeeze-and-Excitation Layer (SE Layer). It first extracts and fuses different kinds of skeletal information through two parallel Mixformer Blocks operating in different modes, then further refines the fused features through the SE Layer. The Mixformer Block integrates Graph Convolutional Networks (GCN) into the Transformer, enhancing the use of both local and global information, and we implement both temporal and spatial variants of it to capture spatial and temporal relationships. We extensively evaluated our model on two benchmark datasets (Human3.6M and MPI-INF-3DHP). The experimental results show that MixTGFormer achieves state-of-the-art performance, with P1 errors of 37.6 mm and 15.7 mm on these datasets, respectively.
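The abstract's core idea, a block that fuses a GCN branch (local skeletal relations) with a self-attention branch (global relations) and then re-weights channels with an SE gate, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: all function names, shapes, and the identity adjacency matrix are assumptions for demonstration.

```python
# Hedged sketch of a Mixformer-style block: GCN branch + attention branch,
# fused and passed through a Squeeze-and-Excitation channel gate.
# Shapes and parameter names are illustrative assumptions, not the paper's code.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gcn_branch(x, adj, wg):
    # x: (J, C) per-joint features; adj: (J, J) normalized skeleton adjacency.
    # Local message passing along skeletal edges.
    return adj @ x @ wg

def attention_branch(x, wq, wk, wv):
    # Global pairwise interactions between all joints via self-attention.
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v

def se_gate(x, w1, w2):
    # Squeeze: average over joints -> (C,); Excitation: tiny MLP -> channel weights.
    s = x.mean(axis=0)
    g = 1.0 / (1.0 + np.exp(-(np.maximum(s @ w1, 0.0) @ w2)))  # sigmoid(relu(s W1) W2)
    return x * g  # channel re-weighting

def mixformer_block(x, adj, p):
    # Fuse local (GCN) and global (attention) features, then gate channels.
    fused = gcn_branch(x, adj, p["wg"]) + attention_branch(x, p["wq"], p["wk"], p["wv"])
    return se_gate(fused, p["w1"], p["w2"])

# Toy forward pass: 17 joints, 32 channels, SE reduction to 8.
rng = np.random.default_rng(0)
J, C, R = 17, 32, 8
adj = np.eye(J)  # identity stands in for a real normalized skeleton graph
p = {k: rng.standard_normal(s) * 0.1 for k, s in {
    "wg": (C, C), "wq": (C, C), "wk": (C, C), "wv": (C, C),
    "w1": (C, R), "w2": (R, C)}.items()}
out = mixformer_block(rng.standard_normal((J, C)), adj, p)
print(out.shape)  # (17, 32)
```

In the paper, two such blocks run in parallel (spatial and temporal modes) before fusion; the sketch shows only the single-block structure the abstract describes.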

Top-level tags: computer vision, model training, machine learning
Detailed tags: 3D human pose estimation, spatio-temporal modeling, graph convolutional networks, transformer, skeleton analysis

Dual-stream Spatio-Temporal GCN-Transformer Network for 3D Human Pose Estimation


1️⃣ One-Sentence Summary

This paper proposes a new method called MixTGFormer, which combines the strengths of graph convolutional networks and Transformers to capture both global and local relationships of the human skeleton in space and time, achieving leading performance on 3D human pose estimation.

Source: arXiv 2604.17688