

Poster

Advancing Pose-Guided Image Synthesis with Progressive Conditional Diffusion Models

Fei Shen · Hu Ye · Jun Zhang · Cong Wang · Xiao Han · Yang Wei

Halle B #289
Tue 7 May 1:45 a.m. PDT — 3:45 a.m. PDT

Abstract:

Recent work has showcased the significant potential of diffusion models in pose-guided person image synthesis. However, owing to the inconsistency in pose between the source and target images, synthesizing an image with a distinct pose, relying exclusively on the source image and target pose information, remains a formidable challenge. This paper presents Progressive Conditional Diffusion Models (PCDMs) that incrementally bridge the gap between person images under the target and source poses through three stages. Specifically, in the first stage, we design a simple prior conditional diffusion model that predicts the global features of the target image by mining the global alignment relationship between pose coordinates and image appearance. Then, the second stage establishes a dense correspondence between the source and target images using the global features from the previous stage, and an inpainting conditional diffusion model is proposed to further align and enhance the contextual features, generating a coarse-grained person image. In the third stage, we propose a refining conditional diffusion model that uses the coarsely generated image from the previous stage as a condition, achieving texture restoration and enhancing fine-detail consistency. The three-stage PCDMs work progressively to generate the final high-quality and high-fidelity synthesized image. Both qualitative and quantitative results demonstrate the consistency and photorealism of our proposed PCDMs under challenging scenarios. The code and model will be available at https://github.com/tencent-ailab/PCDMs.
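To make the three-stage flow concrete, the sketch below wires up the pipeline the abstract describes: a prior model predicting global target features from pose coordinates and source appearance, an inpainting model producing a coarse target image, and a refining model conditioned on that coarse result. Everything here is a hypothetical stand-in: the `StubDiffusion` class, its toy `sample` loop (not a real reverse-diffusion solver), and all tensor shapes are placeholders, not the authors' API; see the linked repository for the actual implementation.

```python
# Minimal sketch of the three-stage PCDMs inference flow (toy placeholder code).
import torch
import torch.nn as nn


class StubDiffusion(nn.Module):
    """Placeholder for a conditional diffusion model; hypothetical, not PCDMs' code."""

    def __init__(self, cond_dim: int, out_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(cond_dim, 256), nn.ReLU(), nn.Linear(256, out_dim)
        )

    @torch.no_grad()
    def sample(self, cond: torch.Tensor, steps: int = 50) -> torch.Tensor:
        # Start from Gaussian noise and iterate toward the conditioned prediction.
        # A toy fixed-point update stands in for a real denoising schedule.
        x = torch.randn(cond.shape[0], self.net[-1].out_features)
        for _ in range(steps):
            x = x - 0.1 * (x - self.net(cond))
        return x


B = 2
img_dim = 3 * 64 * 64   # flattened toy image
pose_dim = 18 * 2       # e.g., 18 keypoints as (x, y) coordinates
feat_dim = 512          # global target-image feature size (assumed)

prior   = StubDiffusion(cond_dim=img_dim + 2 * pose_dim, out_dim=feat_dim)
inpaint = StubDiffusion(cond_dim=img_dim + feat_dim,     out_dim=img_dim)
refine  = StubDiffusion(cond_dim=2 * img_dim,            out_dim=img_dim)

src_img  = torch.randn(B, img_dim)    # flattened source person image
src_pose = torch.randn(B, pose_dim)   # source pose coordinates
tgt_pose = torch.randn(B, pose_dim)   # target pose coordinates

# Stage 1: predict global target-image features from poses + source appearance.
global_feat = prior.sample(torch.cat([src_img, src_pose, tgt_pose], dim=-1))
# Stage 2: generate a coarse target image conditioned on the global features.
coarse_img = inpaint.sample(torch.cat([src_img, global_feat], dim=-1))
# Stage 3: refine texture and fine details using the coarse image as condition.
final_img = refine.sample(torch.cat([src_img, coarse_img], dim=-1))
print(final_img.shape)  # torch.Size([2, 12288])
```

The point of the sketch is the progressive conditioning: each stage consumes the previous stage's output as an additional condition, so the pose-appearance gap is closed in increments rather than in a single generation step.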
