FlowGrounded-VideoPrediction

Torch implementation of our ECCV18 paper on video prediction from a single still image.

In each panel, from left to right: the single starting frame followed by the predicted sequence (the next 16 frames).

Getting started

git clone https://github.com/Yijunmaverick/FlowGrounded-VideoPrediction
cd FlowGrounded-VideoPrediction

Preparation

  • Data

    • Put the video data (e.g., .mp4 or .avi) in a folder and put it under ./datasets/DTexture/raw/.
    • Run the following command to convert videos to frames and generate the metadata for training. The testing data are prepared in the same way; make sure that the metadata for both training and testing are ready before running experiments (see the sketch after this list for a rough picture of what this step does).
cd datasets/
sh data_process.sh
cd ..
  • SPyNet

    • We use the flows estimated by SPyNet as the ground truth for training. Make sure that the SPyNet code is compiled successfully and works well.
  • Pretrained models

    • Run the following command to download the pretrained VGG model (used for the perceptual loss) and our models trained on the KTH and WavingFlag datasets for testing.
sh download_models.sh
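
For intuition about the data preparation step above, here is a rough Python sketch of what a video-to-frames conversion typically looks like. This is not the repo's data_process.sh; the directory layout, frame naming, and the toy metadata format below are assumptions made purely for illustration.

# Rough illustration of a video-to-frames conversion step.
# NOT the repo's data_process.sh; paths, frame naming, and the metadata
# format are assumptions for illustration only.
import os
import glob
import cv2

raw_dir = "datasets/DTexture/raw"       # input videos (.mp4/.avi)
frame_dir = "datasets/DTexture/frames"  # assumed output layout: one folder per video

for video_path in glob.glob(os.path.join(raw_dir, "*.*")):
    name = os.path.splitext(os.path.basename(video_path))[0]
    out_dir = os.path.join(frame_dir, name)
    os.makedirs(out_dir, exist_ok=True)

    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, "%05d.png" % idx), frame)
        idx += 1
    cap.release()

    # Toy "metadata": one line per video with its frame count
    # (the real metadata produced by data_process.sh may differ).
    with open(os.path.join(frame_dir, "meta.txt"), "a") as f:
        f.write("%s %d\n" % (name, idx))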

Training

  • Train the 3DcVAE model for flow prediction:
th train_3DcVAE.lua --dataRoot datasets/DTexture
  • Train the flow2rgb model for frame generation:
th train_flow2rgb.lua --dataRoot datasets/DTexture
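
The two commands above reflect the two-stage pipeline from the paper: the 3DcVAE predicts a flow sequence from the starting frame, and flow2rgb turns those flows into RGB frames. As a rough illustration of the flow-grounded idea, here is a minimal PyTorch sketch of warping a frame with a flow field via bilinear sampling. It is not the repo's Torch/Lua code; tensor shapes and the pixel-unit flow convention are assumptions.

# Minimal PyTorch sketch of warping a frame with a flow field, to illustrate
# the "flow-grounded" idea behind the flow2rgb stage. NOT the repo's
# Torch/Lua implementation; shapes and conventions are assumptions.
import torch
import torch.nn.functional as F

def warp(frame, flow):
    # frame: (B, 3, H, W); flow: (B, 2, H, W) as (dx, dy) in pixels
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    xs = xs.float().to(frame.device) + flow[:, 0]   # sampled x coordinate per pixel
    ys = ys.float().to(frame.device) + flow[:, 1]   # sampled y coordinate per pixel
    # normalize coordinates to [-1, 1] as grid_sample expects
    grid = torch.stack((2.0 * xs / (w - 1) - 1.0,
                        2.0 * ys / (h - 1) - 1.0), dim=-1)  # (B, H, W, 2)
    return F.grid_sample(frame, grid, align_corners=True)

# toy usage: a zero flow returns the input frame (up to interpolation)
frame = torch.rand(1, 3, 64, 64)
flow = torch.zeros(1, 2, 64, 64)
out = warp(frame, flow)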

Testing

  • Test the two steps (flow prediction + frame generation) together:
th test.lua --dataRoot datasets/DTexture
  • With ffmpeg installed, run the following command to convert the predicted frames to a gif or video:
python gif.py
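
For reference, a sketch of how frames can be stitched into a gif with ffmpeg is shown below. This is not the repo's gif.py; the frame naming pattern, frame rate, and output path are assumptions for illustration.

# Rough sketch of converting predicted frames into a gif with ffmpeg.
# NOT the repo's gif.py; frame pattern, frame rate, and output path
# are assumptions.
import subprocess

frames = "results/%05d.png"     # assumed frame naming pattern
subprocess.run([
    "ffmpeg", "-y",
    "-framerate", "8",          # playback speed of the resulting gif
    "-i", frames,
    "results/prediction.gif",
], check=True)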

Citation

@inproceedings{Prediction-ECCV-2018,
    author = {Li, Yijun and Fang, Chen and Yang, Jimei and Wang, Zhaowen and Lu, Xin and Yang, Ming-Hsuan},
    title = {Flow-Grounded Spatial-Temporal Video Prediction from Still Images},
    booktitle = {European Conference on Computer Vision},
    year = {2018}
}

Acknowledgement

  • Code is heavily borrowed from DrNet.
