Exploiting Temporal Contexts with Strided Transformer for 3D Human Pose Estimation


Abstract

Despite great progress in 3D human pose estimation from videos, it remains an open problem to take full advantage of redundant 2D pose sequences to learn a representative feature for generating a single 3D pose. To this end, we propose an improved Transformer-based architecture, called the Strided Transformer, which lifts a sequence of 2D joint locations to a 3D pose for video-based 3D human pose estimation. Specifically, a vanilla Transformer encoder (VTE) is adopted to model long-range dependencies of 2D pose sequences. To reduce the redundancy of the sequence and aggregate information from local contexts, strided convolutions are incorporated into the VTE to progressively shrink the sequence length. The modified VTE, termed the strided Transformer encoder (STE), is built upon the outputs of the VTE. The STE not only effectively aggregates long-range information into a single-vector representation in a hierarchical global-to-local fashion but also significantly reduces the computational cost. Furthermore, a full-to-single supervision scheme is applied at both the full-sequence scale and the single-target-frame scale, on the outputs of the VTE and the STE, respectively. This scheme imposes extra temporal smoothness constraints in conjunction with the single-target-frame supervision and improves the representation ability of features for the target frame. The proposed architecture is evaluated on two challenging benchmark datasets, Human3.6M and HumanEva-I, and achieves state-of-the-art results with far fewer parameters.
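The core mechanism of the STE described above is a temporal convolution with a stride greater than one, so that each stage shortens the sequence until a single target-frame representation remains. The following is a minimal NumPy sketch of that idea only, not the authors' implementation; the kernel size, stride, channel count, and the 27-frame input length are illustrative assumptions.

```python
import math
import numpy as np

def strided_conv1d(x, weight, stride):
    """Temporal 1D convolution with 'same'-style padding and a stride.

    x: (T, C_in) sequence of per-frame features.
    weight: (k, C_in, C_out) convolution kernel (illustrative, untrained).
    Returns a (ceil(T / stride), C_out) sequence, i.e. a shorter sequence.
    """
    k, c_in, c_out = weight.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))       # zero-pad the time axis
    out_len = math.ceil(x.shape[0] / stride)
    out = np.empty((out_len, c_out))
    for i in range(out_len):
        window = xp[i * stride : i * stride + k]          # (k, C_in)
        out[i] = np.einsum('kc,kcd->d', window, weight)   # sum over k, C_in
    return out

rng = np.random.default_rng(0)
T, C = 27, 8                        # hypothetical 27-frame clip, 8 channels
x = rng.standard_normal((T, C))
w = rng.standard_normal((3, C, C))  # kernel size 3, channels preserved
for s in (3, 3, 3):                 # three stride-3 stages: 27 -> 9 -> 3 -> 1
    x = strided_conv1d(x, w, s)
print(x.shape)                      # (1, 8): a single-frame representation
```

In the paper's design, stages like these are interleaved with Transformer encoder layers, so self-attention mixes long-range context at each scale before the stride discards redundant frames.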
