

Virtual presentation / poster accept

Human MotionFormer: Transferring Human Motions with Vision Transformers

Hongyu Liu · Xintong Han · Cheng-Bin Jin · Lihui Qian · Huawei Wei · Zhe Lin · Faqiang Wang · Haoye Dong · Yibing Song · Jia Xu · Qifeng Chen

Keywords: [ Applications ]


Abstract:

Human motion transfer aims to transfer the motion of a dynamic target person to a static source person for motion synthesis. Accurately matching the source person to the target motion, across both large and subtle motion changes, is vital to the quality of the transferred motion. In this paper, we propose Human MotionFormer, a hierarchical ViT framework that leverages global and local perception to capture large and subtle motion matching, respectively. It consists of two ViT encoders that extract features from the inputs (a target motion image and a source human image) and a ViT decoder with several cascaded blocks for feature matching and motion transfer. In each block, we use the target motion feature as the Query and the source person feature as the Key and Value, computing cross-attention maps for global feature matching. A convolutional layer then improves local perception after the global cross-attention computation. This matching process is implemented in both the warping and generation branches to guide the motion transfer. During training, we propose a mutual learning loss that lets the warping and generation branches co-supervise each other for better motion representations. Experiments show that Human MotionFormer sets a new state of the art both qualitatively and quantitatively. Project page: https://github.com/KumapowerLIU/Human-MotionFormer.
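The decoder block described above (global cross-attention with the target motion feature as Query and the source person feature as Key/Value, followed by a convolution for local perception) can be sketched in a few lines of PyTorch. The sketch below is an illustration under stated assumptions, not the authors' implementation: the module names, dimensions, the depthwise-convolution choice, and the detached-L1 form of the mutual learning loss are all hypothetical.

```python
# Minimal sketch of one MotionFormer-style decoder block and a plausible
# mutual learning loss. All design details beyond the abstract (head count,
# depthwise conv, detached-L1 co-supervision) are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossAttentionBlock(nn.Module):
    """Global cross-attention matching followed by local convolutional refinement."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Depthwise 3x3 conv adds local perception after the global matching.
        self.local_conv = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)

    def forward(self, motion_feat: torch.Tensor, source_feat: torch.Tensor,
                h: int, w: int) -> torch.Tensor:
        # motion_feat, source_feat: (B, H*W, C) token sequences from the two encoders.
        q = self.norm_q(motion_feat)                 # Query: target motion feature
        kv = self.norm_kv(source_feat)               # Key/Value: source person feature
        attended, _ = self.attn(query=q, key=kv, value=kv)
        x = motion_feat + attended                   # residual global feature matching
        # Reshape tokens to a feature map for the convolution, then back.
        b, n, c = x.shape
        fmap = x.transpose(1, 2).reshape(b, c, h, w)
        fmap = fmap + self.local_conv(fmap)          # residual local refinement
        return fmap.flatten(2).transpose(1, 2)


def mutual_learning_loss(warp_out: torch.Tensor, gen_out: torch.Tensor) -> torch.Tensor:
    """One plausible co-supervision term: each branch matches a detached copy
    of the other, so gradients reach both branches without trivial collapse."""
    return F.l1_loss(warp_out, gen_out.detach()) + F.l1_loss(gen_out, warp_out.detach())
```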
