Frame interpolation via adaptive separable convolution, adversarial training and post-processing pixel shaders
This project builds on the "Video frame interpolation via adaptive separable convolution" paper (here), using a similar network with the same separable convolution operation to generate the output frames. On top of that, training has been extended with a GAN component, whose discriminator is a dedicated fully connected stack on top of a pretrained VGG-19 network. The cost function has also been altered to include an L1 term, and a custom pixel shader is applied to the output frames to shift some specific work away from the network itself, freeing the available weights to focus on the motion flow estimation task.
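For reference, the core separable convolution operation from the paper estimates, for every output pixel, a pair of 1D kernels (one vertical, one horizontal) whose outer product forms the local 2D kernel applied to the input patch. The following is a minimal NumPy sketch of that operation for a single grayscale frame; the function name, shapes, and loop-based layout are illustrative, not taken from this repository's actual implementation (which runs the operation as a custom GPU layer).

```python
import numpy as np

def adaptive_sepconv(frame, kv, kh):
    """Per-pixel separable convolution (illustrative sketch).

    frame: (H+K-1, W+K-1) padded grayscale input
    kv, kh: (H, W, K) per-pixel vertical / horizontal 1D kernels
    returns: (H, W) interpolated output
    """
    H, W, K = kv.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            patch = frame[y:y + K, x:x + K]
            # kv[y, x] (outer) kh[y, x] is the local KxK 2D kernel;
            # applying it to the patch reduces to two matrix products
            out[y, x] = kv[y, x] @ patch @ kh[y, x]
    return out
```

In the full method this is evaluated once per input frame (with kernels predicted by the network for both frames) and the two results are summed to produce the interpolated frame.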
DISCLAIMER: this repository is provided as-is and is no longer actively maintained. The project was developed during a university course and has not been officially presented as a research paper. The goal of this repository is simply to share the work I've done in this area, rather than keeping it private, and possibly to serve as a reference for anyone interested in having a look.