MingSun-Tse / MMVID

[CVPR 2022] Show Me What and Tell Me How: Video Synthesis via Multimodal Conditioning

Home Page: https://snap-research.github.io/MMVID/

MMVID
Show Me What and Tell Me How: Video Synthesis via Multimodal Conditioning (CVPR 2022)

Generated Videos on Multimodal VoxCeleb

This repo will contain the code for training and testing, the models, and the data for MMVID (coming soon).

Show Me What and Tell Me How: Video Synthesis via Multimodal Conditioning
Ligong Han, Jian Ren, Hsin-Ying Lee, Francesco Barbieri, Kyle Olszewski, Shervin Minaee, Dimitris Metaxas, Sergey Tulyakov
Snap Inc., Rutgers University
CVPR 2022

Citation

If our code, data, or models help your work, please cite our paper:

@article{han2022show,
  title={Show Me What and Tell Me How: Video Synthesis via Multimodal Conditioning},
  author={Han, Ligong and Ren, Jian and Lee, Hsin-Ying and Barbieri, Francesco and Olszewski, Kyle and Minaee, Shervin and Metaxas, Dimitris and Tulyakov, Sergey},
  journal={arXiv preprint arXiv:2203.02573},
  year={2022}
}
