rubenvillegas / iclr2017mcnet

Tensorflow implementation of the ICLR 2017 paper: Decomposing Motion and Content for Natural Video Sequence Prediction

Home Page: https://sites.google.com/a/umich.edu/rubenevillegas/iclr2017

Training/Testing mcnet for different values of K and T

sharathyadav1993 opened this issue · comments

@rubenvillegas, is it possible to train and test mcnet with different values of K and T? I trained and tested on an RGB dataset with the given parameters K=4 and T=1, but when I change K and T, the model fails to build. Do you suggest any configuration changes to the model?

Yes. As described in the paper, I train with one K and T, and then test with a longer T. You shouldn't need any configuration change other than changing K/T. If it's not working, something may be wrong on your side.
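The reason a longer T works at test time is that the predictor is applied autoregressively: each predicted frame is fed back as context, so the rollout length is independent of the T used in training. A minimal sketch of that idea (the `step_fn` here is a hypothetical stand-in for MCnet's motion+content predictor, not the repo's actual API):

```python
import numpy as np

def predict_rollout(frames, step_fn, T):
    """Autoregressively predict T future frames from K context frames.

    frames: array of shape (K, H, W). step_fn maps a stack of K context
    frames to the next frame. Because each prediction is appended back
    into the context window, the same model trained with one T can be
    rolled out for any longer T at test time.
    """
    K = frames.shape[0]
    context = list(frames)
    preds = []
    for _ in range(T):
        nxt = step_fn(np.stack(context[-K:]))  # predict one step ahead
        preds.append(nxt)
        context.append(nxt)                    # feed prediction back in
    return np.stack(preds)

# Toy step function (predicts the mean of the context window), just to
# show the rollout mechanics: K=4 context frames, T=10 predicted frames.
step = lambda ctx: ctx.mean(axis=0)
out = predict_rollout(np.zeros((4, 8, 8)), step, T=10)
print(out.shape)  # (10, 8, 8)
```

Note that prediction errors compound over long rollouts, which is why the paper trains on a shorter horizon and evaluates the degradation at longer ones.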

@rubenvillegas, thank you for your response. As you suggested, I was able to train with one value of K and T and test by extending T. I tried 1000 iterations on a very small dataset, so the predictions are not accurate.

Could you suggest the optimal number of training videos, and the number of training iterations, that would give the best results?

The bigger the dataset, the better. For the number of iterations, you would have to use a validation set to figure this out. I can't give a reliable answer off the top of my head without running experiments on your data myself.
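The validation procedure being suggested can be sketched as follows: evaluate the model on held-out videos at regular checkpoints and keep the iteration with the lowest validation loss, stopping once it stops improving. This is a generic early-stopping sketch, not code from the repo:

```python
def best_iteration(val_losses, patience=3):
    """Return the index of the checkpoint with the lowest validation
    loss, stopping the scan once `patience` consecutive checks show no
    improvement (simple early stopping)."""
    best_i, best = 0, float("inf")
    for i, loss in enumerate(val_losses):
        if loss < best:
            best_i, best = i, loss          # new best checkpoint
        elif i - best_i >= patience:
            break                           # no improvement for a while
    return best_i

# Loss improves up to the third checkpoint, then overfits:
print(best_iteration([1.0, 0.8, 0.7, 0.75, 0.9, 0.95]))  # 2
```

In practice you would compute `val_losses` by running the trained predictor on a held-out split at each saved checkpoint, using the same metric you care about at test time (e.g. per-frame PSNR or SSIM, as in the paper's evaluation).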