ruoque / Thin-Plate-Spline-Motion-Model

[CVPR2022] Thin-Plate Spline Motion Model for Image Animation

Source code of the CVPR'2022 paper "Thin-Plate Spline Motion Model for Image Animation".

Example animation

(Example animations on the VoxCeleb and TED-talks datasets.)

PS: The paper trains the model for 100 epochs for a fair comparison. You can use more data and train for more epochs to get better performance.

Web demo for animation

Try the web demo for animation on Replicate.

Installation

We support Python 3 (Python 3.9 is recommended). To install the dependencies, run:

pip install -r requirements.txt
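
After installing, it can be worth confirming that PyTorch (the core dependency pulled in by requirements.txt) is importable and sees your GPU. A minimal sketch in Python:

# Quick sanity check after installing the requirements:
# verify that PyTorch is importable and that a CUDA device is visible.
import torch

print('torch version:', torch.__version__)
print('CUDA available:', torch.cuda.is_available())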

YAML configs

There is one configuration file per dataset in the config folder, named config/dataset_name.yaml. See config/taichi-256.yaml for a description of each parameter.
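
If you just want to see which sections and parameters a config defines, you can load it with PyYAML (the configs are ordinary YAML files). A minimal sketch; config/vox-256.yaml is only an example, any config/dataset_name.yaml works:

# Load a dataset config and list its top-level sections and parameter names.
import yaml

with open('config/vox-256.yaml') as f:
    config = yaml.safe_load(f)

for section, params in config.items():
    keys = list(params.keys()) if isinstance(params, dict) else params
    print(section, '->', keys)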

Datasets

  1. MGif. Follow Monkey-Net.

  2. TaiChiHD and VoxCeleb. Follow instructions from video-preprocessing.

  3. TED-talks. Follow instructions from MRAA.
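
After preprocessing, it is worth checking that the train/test splits ended up where your dataset config expects them. A minimal sketch; the data/vox path and the train/test layout are assumptions based on the FOMM/MRAA preprocessing pipelines, so adjust them to whatever path your config actually points at:

# Count entries in the train/test splits after preprocessing.
# 'data/vox' and the train/test layout are assumptions; use the path
# and structure that your config/dataset_name.yaml actually points at.
import os

root = 'data/vox'
for split in ('train', 'test'):
    path = os.path.join(root, split)
    count = len(os.listdir(path)) if os.path.isdir(path) else 0
    print(f'{split}: {count} entries')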

Training

To train a model on a specific dataset, run:

CUDA_VISIBLE_DEVICES=0,1 python run.py --config config/dataset_name.yaml --device_ids 0,1

A log folder named after the timestamp will be created. Checkpoints, loss values, and reconstruction results will be saved to this folder.
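
Because each run gets a timestamped folder, it can be handy to locate the latest run (and its checkpoint) programmatically, for example to fill in the --checkpoint arguments used below. A minimal sketch; the log/ directory is an assumption borrowed from the FOMM-style code this repo builds on, so change it to wherever your runs are actually written:

# Locate the most recent timestamped run and its checkpoint file.
# 'log' is an assumed default log directory; adjust it if your runs live elsewhere.
import os

log_root = 'log'
runs = sorted(d for d in os.listdir(log_root)
              if os.path.isdir(os.path.join(log_root, d)))
latest = os.path.join(log_root, runs[-1])
print('latest run:', latest)
print('checkpoint:', os.path.join(latest, 'checkpoint.pth.tar'))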

Training AVD network

To train the AVD network on a specific dataset, run:

CUDA_VISIBLE_DEVICES=0 python run.py --mode train_avd --checkpoint '{checkpoint_folder}/checkpoint.pth.tar' --config config/dataset_name.yaml

Checkpoints, loss values, and reconstruction results will be saved to {checkpoint_folder}.

Evaluation on video reconstruction

To evaluate the reconstruction performance, run:

CUDA_VISIBLE_DEVICES=0 python run.py --mode reconstruction --config config/dataset_name.yaml --checkpoint '{checkpoint_folder}/checkpoint.pth.tar'

A reconstruction subfolder will be created in {checkpoint_folder}. The generated videos will be stored in this folder; they are also saved in the png subfolder in lossless '.png' format for evaluation. To compute metrics, follow the instructions from pose-evaluation.
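
The standard metrics come from pose-evaluation, but for a quick first look you can compute a simple per-pixel L1 error over the saved frames yourself. A minimal sketch; both directory paths are placeholders, and the exact layout of the png subfolder depends on how run.py writes its reconstructions, so adapt the file matching accordingly:

# Rough per-pixel L1 error between generated frames and ground truth.
# Both directories are placeholders; adapt the paths and file matching
# to the actual layout of the png subfolder and your ground-truth data.
import os
import numpy as np
import imageio

gen_dir = 'path/to/checkpoint_folder/reconstruction/png'  # placeholder
gt_dir = 'path/to/ground_truth/png'                       # placeholder

errors = []
for name in sorted(os.listdir(gen_dir)):
    gen = imageio.imread(os.path.join(gen_dir, name)).astype(np.float32) / 255.0
    gt = imageio.imread(os.path.join(gt_dir, name)).astype(np.float32) / 255.0
    errors.append(np.abs(gen - gt).mean())

print('mean L1:', float(np.mean(errors)))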

Pre-trained models

Image animation demo

  • Google Colab: here
  • notebook: open demo.ipynb, edit the config cell, and run it for image animation.
  • python:
CUDA_VISIBLE_DEVICES=0 python demo.py --config config/vox-256.yaml --checkpoint checkpoints/vox.pth.tar --source_image ./source.jpg --driving_video ./driving.mp4
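
Before running the demo, it can help to confirm that the source image and driving video load correctly and to note the video's resolution, fps, and length. A minimal sketch, assuming imageio (and its ffmpeg plugin) is available, which the animation code itself relies on; the file names match the command above:

# Inspect the demo inputs before running demo.py.
# File names match the command above; adjust them if yours differ.
import imageio

source = imageio.imread('./source.jpg')
reader = imageio.get_reader('./driving.mp4')
meta = reader.get_meta_data()
n_frames = sum(1 for _ in reader)
reader.close()

print('source image shape:', source.shape)
print('driving video fps:', meta.get('fps'))
print('driving video frames:', n_frames)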

Acknowledgments

The main code is based upon FOMM and MRAA.

Thanks to the authors for their excellent work!


License

MIT License
