weixiong-ur / mdgan

Official code of the CVPR 2018 paper "Learning to Generate Time-lapse Videos Using Multi-Stage Dynamic Generative Adversarial Networks".


Learning to Generate Time-lapse Videos Using Multi-stage Dynamic Generative Adversarial Networks

This is the official code of the CVPR 2018 paper.

CVPR 2018 Paper | Project Page | Dataset

Usage

  1. Requirements:
    • Download our time-lapse dataset.
    • Python 2.7
    • PyTorch 0.3.0 or 0.3.1
    • ffmpeg
  2. Testing:
    • Download our pretrained models.
    • Run python test.py --cuda --testf your_test_dataset_folder (see the usage sketch after this list).
  3. Sample outputs:
    • ./sample_outputs contains mp4 files that were generated on my machine.
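A minimal usage sketch, assuming a particular folder layout: the script below batches the testing command from step 2 over several prepared folders and then assembles generated frame directories into mp4 clips with ffmpeg (listed in the requirements above). The paths ./test_data and ./results and the frame pattern frame_%03d.jpg are illustrative assumptions, not part of this repository's documented interface.

```python
# Hypothetical driver script (not part of this repository): runs test.py on
# several prepared test folders, then encodes frame directories into mp4 clips
# with ffmpeg. Folder layout, frame naming pattern, and paths are assumptions.
import glob
import os
import subprocess

TEST_ROOT = "./test_data"    # assumed location of prepared test folders
RESULT_ROOT = "./results"    # assumed location where generated frames land

# 1. Run the released test script on each prepared folder,
#    using only the flags documented above (--cuda, --testf).
for folder in sorted(glob.glob(os.path.join(TEST_ROOT, "*"))):
    subprocess.check_call(["python", "test.py", "--cuda", "--testf", folder])

# 2. Encode each directory of generated frames into an mp4 clip with ffmpeg.
#    The frame pattern "frame_%03d.jpg" is an assumption; adjust to the real output.
for frame_dir in sorted(glob.glob(os.path.join(RESULT_ROOT, "*"))):
    if not os.path.isdir(frame_dir):
        continue
    out_mp4 = frame_dir + ".mp4"
    subprocess.check_call([
        "ffmpeg", "-y",
        "-framerate", "8",
        "-i", os.path.join(frame_dir, "frame_%03d.jpg"),
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",
        out_mp4,
    ])
```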

Citing

@InProceedings{Xiong_2018_CVPR,
  author    = {Xiong, Wei and Luo, Wenhan and Ma, Lin and Liu, Wei and Luo, Jiebo},
  title     = {Learning to Generate Time-Lapse Videos Using Multi-Stage Dynamic Generative Adversarial Networks},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2018}
}
