PoseCrafter: One-Shot Personalized Video Synthesis Following Flexible Pose Control

Yong Zhong*1, Min Zhao*1, Zebin You1, Xiaofeng Yu2, Changwang Zhang2, Chongxuan Li**1
1Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China 2Huawei Technologies Co., Ltd.

Abstract

In this paper, we introduce PoseCrafter, a one-shot method for personalized video generation that follows flexible pose control. Built upon Stable Diffusion and ControlNet, we carefully design an inference process to produce high-quality videos without corresponding ground-truth frames. First, we select an appropriate reference frame from the training video and invert it to initialize all latent variables for generation. Then, we insert the corresponding training pose into the target pose sequence to enhance faithfulness through a trained temporal attention module. Furthermore, to alleviate the face and hand degradation caused by discrepancies between training and inference poses, we perform simple latent editing through an affine transformation matrix involving facial and hand landmarks. Extensive experiments on several datasets demonstrate that PoseCrafter outperforms baselines pre-trained on vast collections of videos across 8 commonly used metrics. Moreover, PoseCrafter can follow poses from different individuals or artificially edited pose sequences while retaining the human identity of an open-domain training video.

Method

Figure: Overview of our method.

Built upon Stable Diffusion and ControlNet, PoseCrafter requires only fine-tuning the pre-trained model on a single open-domain video. Technically, we carefully design an inference process that produces high-quality videos following flexible pose control without corresponding ground-truth frames. First, we select an appropriate reference frame from the training video and invert it to initialize all latent variables for generation. Then, we insert the corresponding training pose into the target pose sequence to enhance faithfulness through a trained temporal attention module. Furthermore, to alleviate the face and hand degradation caused by discrepancies between training and inference poses, we perform simple latent editing through an affine transformation matrix involving facial and hand landmarks.
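For concreteness, the following is a minimal sketch of the inference procedure described above, assuming a Stable Diffusion + ControlNet backbone with a trained temporal attention module. It is an illustration only: the helper names (select_reference_frame, ddim_invert, denoise) and the reference-selection criterion are hypothetical placeholders, not APIs from the released code.

import torch

def posecrafter_inference(train_frames, train_poses, target_poses, model):
    # 1) Pick a reference frame from the training video; a natural criterion
    #    (assumed here) is the frame whose pose is closest to the target poses.
    ref_idx = select_reference_frame(train_poses, target_poses)  # hypothetical
    ref_frame, ref_pose = train_frames[ref_idx], train_poses[ref_idx]

    # 2) DDIM-invert the reference frame and use the inverted latent to
    #    initialize the latent variables of all target frames.
    ref_latent = ddim_invert(model, ref_frame, condition=ref_pose)  # (C, H, W)
    latents = ref_latent.unsqueeze(0).repeat(len(target_poses), 1, 1, 1)

    # 3) Insert the training pose (and its latent) in front of the target pose
    #    sequence so the temporal attention module can propagate identity
    #    information from the reference frame to every generated frame.
    poses = [ref_pose] + list(target_poses)
    latents = torch.cat([ref_latent.unsqueeze(0), latents], dim=0)

    # 4) Run ControlNet-conditioned denoising and drop the reference slot.
    frames = denoise(model, latents, poses)  # hypothetical sampler wrapper
    return frames[1:]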
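The latent-editing step relies on an affine transformation that maps the facial and hand landmarks of a training pose onto those of the inference pose. The paper's exact estimation procedure is not reproduced here; as a sketch of the general technique, a least-squares fit in homogeneous coordinates yields such a 2x3 affine matrix:

import numpy as np

def estimate_affine(src_pts, dst_pts):
    # src_pts, dst_pts: (K, 2) arrays of facial or hand landmark coordinates.
    # Returns the 2x3 matrix A minimizing ||[src, 1] @ A.T - dst||^2.
    ones = np.ones((len(src_pts), 1))
    src_h = np.hstack([src_pts, ones])                   # (K, 3) homogeneous
    A, *_ = np.linalg.lstsq(src_h, dst_pts, rcond=None)  # (3, 2) solution
    return A.T                                           # (2, 3) affine matrix

The estimated matrix can then be used to warp the face and hand regions of the reference latent toward the target pose (for example, with a grid-sample-style warp), mitigating degradation when training and inference poses differ.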

Results

The videos are organized as follows: the first column contains the training frames used to fine-tune PoseCrafter; the second column shows the derived frames from which poses are extracted; the third column shows the poses extracted from those derived frames; and the fourth column presents the frames generated by PoseCrafter, guided by those poses. Note that PoseCrafter was not trained on the derived frames; they are used only to extract inference poses.

Inference with Poses from the Same Individual

N=8, M=100

N=100, M=100

N=100, M=100

N=100, M=100

N=100, M=100

N=100, M=100

N=100, M=100 (+ red hair and beard)

Inference with Artificially Designed Poses

N=100, M=8

N=100, M=8

N=100, M=16

Inference with Poses from Different Individuals

N=50, M=50

N=50, M=50

N=100, M=100

N=100, M=100

N=100, M=100

Comparisons

Comparison 1 (columns, left to right): Ground Truth | PoseCrafter (N=8) | Disco | Fine-tuned Disco | MagicAnimate | ControlVideo | GEN-2

Comparison 2 (columns, left to right): Ground Truth | PoseCrafter (N=32) | PoseCrafter (N=8) | Disco | Fine-tuned Disco | MagicAnimate | GEN-2

BibTeX

@article{zhong2024posecrafter,
  title={PoseCrafter: One-Shot Personalized Video Synthesis Following Flexible Pose Control},
  author={Zhong, Yong and Zhao, Min and You, Zebin and Yu, Xiaofeng and Zhang, Changwang and Li, Chongxuan},
  journal={arXiv preprint arXiv:2405.14582},
  year={2024}
}