Spotlight Poster

Real3D-Portrait: One-shot Realistic 3D Talking Portrait Synthesis

Zhenhui Ye · Tianyun Zhong · Yi Ren · Jiaqi Yang · Weichuang Li · Jiawei Huang · Ziyue Jiang · Jinzheng He · Rongjie Huang · Jinglin Liu · Chen Zhang · Xiang Yin · Zejun MA · Zhou Zhao

Halle B #10
Thu 9 May 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

One-shot 3D talking portrait generation aims to reconstruct a 3D avatar from a single unseen image and then animate it with a reference video or audio to produce a talking portrait video. Existing methods fail to simultaneously achieve accurate 3D avatar reconstruction and stable talking face animation. Moreover, while existing works focus mainly on synthesizing the head, generating a natural torso and background is also vital for a realistic talking portrait video. To address these limitations, we present Real3D-Portrait, a framework that (1) improves one-shot 3D reconstruction with a large image-to-plane model that distills 3D prior knowledge from a 3D face generative model; (2) facilitates accurate motion-conditioned animation with an efficient motion adapter; (3) synthesizes realistic video with natural torso movement and a switchable background using a head-torso-background super-resolution model; and (4) supports one-shot audio-driven talking face generation with a generalizable audio-to-motion model. Extensive experiments show that Real3D-Portrait generalizes well to unseen identities and generates more realistic talking portrait videos than previous methods. Video samples are available at https://real3dportrait.github.io.
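The abstract describes a four-stage pipeline. As a reading aid, the data flow between the stages can be sketched as follows; all function names and return values here are illustrative placeholders invented for this sketch, not the authors' actual API or implementation.

```python
# Hypothetical sketch of the pipeline stages named in the abstract.
# Every identifier below is an assumption made for illustration only.

def image_to_plane(source_image):
    """Stage 1: one-shot 3D reconstruction. A large image-to-plane model
    (distilling priors from a 3D face generative model) maps one portrait
    image to a 3D avatar representation."""
    return {"avatar": f"reconstruction({source_image})"}

def audio_to_motion(audio_clip):
    """Stage 4: a generalizable audio-to-motion model predicts a sequence
    of facial motion conditions from driving audio."""
    return [f"motion({audio_clip}, t={t})" for t in range(3)]

def animate(avatar, motion):
    """Stage 2: an efficient motion adapter renders the reconstructed
    avatar under one motion condition, yielding a head frame."""
    return f"head[{avatar['avatar']}, {motion}]"

def super_resolve(head_frame, background):
    """Stage 3: a head-torso-background super-resolution model composites
    a natural torso and a switchable background into the final frame."""
    return f"frame[{head_frame} + torso + bg:{background}]"

def talking_portrait(source_image, audio_clip, background="original"):
    """End-to-end: one image plus driving audio -> talking portrait video."""
    avatar = image_to_plane(source_image)
    return [super_resolve(animate(avatar, m), background)
            for m in audio_to_motion(audio_clip)]

video = talking_portrait("portrait.png", "speech.wav")
print(len(video))  # one output frame per predicted motion condition
```

The same `animate`/`super_resolve` path would apply to video-driven animation, with the motion sequence extracted from a reference video instead of predicted from audio.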
