
Tencent AI Lab Introduces Progressive Conditional Diffusion Models (PCDMs) that Incrementally Bridge the Gap Between Person Images Under the Target and Source Poses Through Three Stages

by Mahmoud Ghorbel

Research in pose-guided person image synthesis, which aims to generate images of a person with the same appearance under a different pose, has made significant progress in recent years. The technology has broad applications in e-commerce content generation and can improve downstream tasks such as person re-identification. However, it still faces several challenges, primarily due to inconsistencies between the source and target poses.

Researchers have explored various GAN-based, VAE-based, and flow-based techniques to address these challenges. GAN-based approaches suffer from unstable training and may produce unrealistic results; VAE-based methods tend to blur details and misalign poses; and flow-based models can introduce artifacts. Some methods rely on parsing maps but struggle to preserve style and texture. Diffusion models show promise but must still contend with pose inconsistencies between the source and target images.

To tackle these issues, a recently published paper from Tencent AI Lab introduces Progressive Conditional Diffusion Models (PCDMs), which progressively generate high-quality images in three stages: predicting global features, establishing dense correspondences, and refining images for better texture and detail consistency.

The proposed method makes several contributions to pose-guided person image synthesis. It introduces a simple prior conditional diffusion model that predicts the global features of the target image by exploiting the alignment between the source image's appearance and the target pose coordinates. An inpainting conditional diffusion model then establishes dense correspondences, transforming unaligned image-to-image generation into an aligned process. Finally, a refining conditional diffusion model enhances image quality and fidelity.

PCDMs consist of three key stages, each contributing to the overall image synthesis process:

1) Prior Conditional Diffusion Model: In the first stage, the model predicts the global features of the target image by leveraging the alignment relationship between pose coordinates and image appearance. It uses a transformer network conditioned on the source and target poses together with the source image, and a global image embedding obtained from a CLIP image encoder guides the target image synthesis. The loss for this stage encourages the model to predict the un-noised image embedding directly, bridging the gap between the source and target images at the feature level.
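For intuition, here is a minimal, hypothetical sketch of what such a stage-1 prior could look like in PyTorch: a small transformer denoises a global CLIP-style target embedding conditioned on the source embedding, the two poses, and the timestep, and the loss regresses the clean embedding directly. The class name, token layout, dimensions, and noise schedule are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of a stage-1 prior: a transformer that denoises a global
# CLIP-style target-image embedding, conditioned on the source image embedding
# and the source/target pose keypoints. Names and shapes are illustrative only.
import torch
import torch.nn as nn

class PriorTransformer(nn.Module):
    def __init__(self, embed_dim=768, n_keypoints=18, depth=4, heads=8):
        super().__init__()
        self.pose_proj = nn.Linear(n_keypoints * 2, embed_dim)  # (x, y) per joint
        self.time_proj = nn.Linear(1, embed_dim)
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=depth)
        self.out = nn.Linear(embed_dim, embed_dim)

    def forward(self, noisy_tgt_emb, src_emb, src_pose, tgt_pose, t):
        # Token sequence: [timestep, source image, source pose, target pose,
        # noisy target embedding]; the last token's output is the prediction.
        tokens = torch.stack([
            self.time_proj(t.float().unsqueeze(-1)),
            src_emb,
            self.pose_proj(src_pose.flatten(1)),
            self.pose_proj(tgt_pose.flatten(1)),
            noisy_tgt_emb,
        ], dim=1)
        return self.out(self.backbone(tokens)[:, -1])

# Toy training step: the loss regresses the un-noised target embedding directly.
model = PriorTransformer()
b, d, k = 4, 768, 18
clean_emb = torch.randn(b, d)                     # CLIP embedding of the target image
noise = torch.randn_like(clean_emb)
t = torch.randint(0, 1000, (b,))
alpha = (1 - t.float() / 1000).sqrt().unsqueeze(-1)   # toy noise schedule
noisy_emb = alpha * clean_emb + (1 - alpha ** 2).sqrt() * noise
pred = model(noisy_emb, torch.randn(b, d), torch.rand(b, k, 2), torch.rand(b, k, 2), t)
loss = nn.functional.mse_loss(pred, clean_emb)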

2) Inpainting Conditional Diffusion Model: The second stage leverages the global features obtained in the prior stage to establish dense correspondences between the source and target images, effectively transforming the unaligned image-to-image generation task into an aligned one. It ensures that the source and target images and their respective poses are aligned at the image, pose, and feature levels, which is crucial for generating realistic results.
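As a rough illustration of this alignment idea, the hypothetical snippet below assembles stage-2 conditions by placing the source image next to an empty target canvas, concatenating the corresponding pose maps, and passing the stage-1 global embedding as a cross-attention token. All tensor shapes and the commented denoiser call are assumptions for illustration, not the paper's actual pipeline.

# Hypothetical assembly of stage-2 inpainting conditions: source and blank
# target are placed side by side so the denoiser fills in the target half as an
# aligned inpainting problem, guided by the stage-1 global embedding.
import torch

b, c, h, w = 2, 3, 256, 176
src_img   = torch.randn(b, c, h, w)          # source person image
src_pose  = torch.randn(b, 1, h, w)          # rasterized source pose map
tgt_pose  = torch.randn(b, 1, h, w)          # rasterized target pose map
blank_tgt = torch.zeros(b, c, h, w)          # target region to be inpainted
global_emb = torch.randn(b, 1, 768)          # stage-1 prediction, as one token

# Side-by-side layout keeps source pixels and target pixels spatially aligned
# with their respective pose maps.
image_pair = torch.cat([src_img, blank_tgt], dim=-1)      # (b, 3, h, 2w)
pose_pair  = torch.cat([src_pose, tgt_pose], dim=-1)      # (b, 1, h, 2w)
noisy_latent = torch.randn(b, c, h, 2 * w)                # diffusion noise

denoiser_input = torch.cat([noisy_latent, image_pair, pose_pair], dim=1)  # (b, 7, h, 2w)
# pred = denoiser(denoiser_input, timestep, encoder_hidden_states=global_emb)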

3) Refining Conditional Diffusion Model: After the previous stage produces a preliminary coarse-grained target image, the refining conditional diffusion model enhances image quality and texture detail. It uses the coarse-grained image as a condition to further improve fidelity and texture consistency, which involves modifying the first convolutional layer and using an image encoder to extract features from the source image. A cross-attention mechanism infuses these texture features into the network for texture repair and detail enhancement.
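The "modify the first convolutional layer" step can be pictured with the hedged sketch below, which widens the input convolution of a generic diffusers UNet2DConditionModel so that the noisy input and the coarse stage-2 result can be concatenated channel-wise, while source-image texture features enter through cross-attention. The specific model class, channel counts, and token shapes are illustrative assumptions rather than the authors' setup.

# Hypothetical stage-3 refinement setup: widen the first conv of a standard
# conditional UNet to accept the coarse image as extra input channels.
import torch
import torch.nn as nn
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel(
    sample_size=64,
    in_channels=4,                 # original latent channels
    out_channels=4,
    cross_attention_dim=768,
)

# Widen the first conv: 4 noisy-latent channels + 4 coarse-image channels.
old = unet.conv_in
new_conv = nn.Conv2d(8, old.out_channels, kernel_size=old.kernel_size, padding=old.padding)
with torch.no_grad():
    new_conv.weight.zero_()
    new_conv.weight[:, :4] = old.weight    # reuse the original weights for the
    new_conv.bias.copy_(old.bias)          # first 4 channels, zero the new ones
unet.conv_in = new_conv

# Forward pass with texture features from a source-image encoder injected via
# cross-attention (random placeholders of a CLIP-like shape stand in here).
noisy_latent   = torch.randn(1, 4, 64, 64)
coarse_latent  = torch.randn(1, 4, 64, 64)
texture_tokens = torch.randn(1, 77, 768)   # e.g. patch features of the source image
timestep = torch.tensor([10])

pred = unet(torch.cat([noisy_latent, coarse_latent], dim=1),
            timestep, encoder_hidden_states=texture_tokens).sample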

The method is validated through comprehensive experiments on public datasets, demonstrating competitive performance via quantitative metrics (SSIM, LPIPS, FID). A user study further validated the method’s effectiveness. An ablation study examined the impact of individual stages of the PCDMs, highlighting their importance. Lastly, the applicability of PCDMs in person re-identification was demonstrated, showcasing improved re-identification performance compared to baseline methods.
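For readers who want to reproduce this style of evaluation, the snippet below shows one common way to compute SSIM, LPIPS, and FID with the torchmetrics library; the random tensors stand in for real data, and the exact protocol used in the paper may differ.

# A minimal sketch of computing common image-synthesis metrics with
# torchmetrics; placeholder tensors replace real ground-truth and generated images.
import torch
from torchmetrics.image import StructuralSimilarityIndexMeasure
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.lpip import LearnedPerceptualImagePatchSimilarity

real = torch.rand(8, 3, 256, 176)        # ground-truth target images in [0, 1]
fake = torch.rand(8, 3, 256, 176)        # generated images in [0, 1]

ssim = StructuralSimilarityIndexMeasure(data_range=1.0)
print("SSIM:", ssim(fake, real).item())

lpips = LearnedPerceptualImagePatchSimilarity(net_type="alex", normalize=True)
print("LPIPS:", lpips(fake, real).item())

fid = FrechetInceptionDistance(feature=2048, normalize=True)
fid.update(real, real=True)
fid.update(fake, real=False)
print("FID:", fid.compute().item())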

In conclusion, PCDMs present a notable breakthrough in pose-guided person image synthesis. Using a multi-stage approach, PCDMs effectively address alignment and pose consistency issues, producing high-quality, realistic images. The experiments showcase their superior performance in quantitative metrics and user studies, and their applicability to person re-identification tasks further highlights their practical utility. PCDMs offer a promising solution for a wide range of applications, advancing the field of pose-guided image synthesis.

