- flownet2-pytorch: a PyTorch implementation of "FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks".
- pytorch-sepconv: a PyTorch implementation of "Video Frame Interpolation via Adaptive Separable Convolution".
- Download the FlowNet2 pretrained model to `flownet2-pytorch/models`. The available models are:
  - FlowNet2 [620MB] **(please download this model and do not rename it)**
  - FlowNet2-C [149MB]
  - FlowNet2-CS [297MB]
  - FlowNet2-CSS [445MB]
  - FlowNet2-CSS-ft-sd [445MB]
  - FlowNet2-S [148MB]
  - FlowNet2-SD [173MB]
- Move the downloaded model to the `flownet2/models` directory, keeping its name as `FlowNet2_checkpoint.pth.tar`.
- Run `flownet2-pytorch/install.bash` to compile the necessary libraries.
- Install the required Python packages using pip:

$ pip3 install tensorboardX setproctitle colorama tqdm scipy pytz cvbase opencv-python
The command below runs inference using a locally stored model and produces a .flo file for each pair of consecutive pictures in the dataset folder. flow_model_wrapper.py provides two demo functions: one computes the flow for an entire directory, the other for a single pair of images.
$ python3 flow_model_wrapper.py
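The .flo files written above use the Middlebury optical-flow layout: a float32 magic value of 202021.25, the width and height as little-endian int32, then row-major interleaved (u, v) float32 pairs. A minimal stdlib-only reader, as a sketch (the function name is illustrative, not part of this repository):

```python
import struct

def read_flo(path):
    """Read a Middlebury .flo optical-flow file.

    Returns (width, height, flow), where flow[y][x] == (u, v).
    """
    with open(path, "rb") as f:
        magic, = struct.unpack("<f", f.read(4))
        assert abs(magic - 202021.25) < 1e-3, "not a .flo file"
        width, height = struct.unpack("<ii", f.read(8))
        # two float32 channels (u, v) per pixel, row-major
        data = struct.unpack(f"<{width * height * 2}f",
                             f.read(width * height * 8))
    flow = [[(data[2 * (y * width + x)], data[2 * (y * width + x) + 1])
             for x in range(width)]
            for y in range(height)]
    return width, height, flow
```

This is handy for spot-checking the inference output without pulling in OpenCV or cvbase.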
- To visualize a flow file:

```python
# You might need to run `pip install cvbase` first
import cvbase as cvb

# visualize a flow file
cvb.show_flow('result.flo')
```
- To create a random flow and visualize it:

```python
import cvbase as cvb
import numpy as np

# visualize a flow map already loaded in memory
flow = np.random.rand(100, 100, 2).astype(np.float32)
cvb.show_flow(flow)
```
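Flow visualizations of this kind conventionally map direction to hue and magnitude to saturation on a color wheel. A rough sketch of that mapping using numpy and the standard library (`flow_to_rgb` is a hypothetical helper written for illustration, not part of cvbase):

```python
import colorsys
import numpy as np

def flow_to_rgb(flow):
    """Map an (H, W, 2) flow field to an RGB image in [0, 1].

    Direction -> hue, magnitude (normalized) -> saturation.
    """
    u, v = flow[..., 0], flow[..., 1]
    mag = np.sqrt(u ** 2 + v ** 2)
    ang = np.arctan2(v, u)                 # angle in [-pi, pi]
    hue = (ang + np.pi) / (2 * np.pi)      # normalize to [0, 1]
    sat = mag / (mag.max() + 1e-8)         # normalize magnitude
    rgb = np.zeros(flow.shape[:2] + (3,), dtype=np.float32)
    for y in range(flow.shape[0]):
        for x in range(flow.shape[1]):
            rgb[y, x] = colorsys.hsv_to_rgb(hue[y, x], sat[y, x], 1.0)
    return rgb
```

With this encoding, stationary pixels come out white and large motions come out fully saturated, which is why flow images have their characteristic pastel look.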
For the trainer to work correctly, a dataset must exist in a directory called data_set, containing the videos to be used in the training process.
To start the training process, run:

$ python frvsr_trainer.py
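Before launching the trainer, it can help to sanity-check that data_set exists and actually contains videos. A small sketch (the directory name comes from the text above; the extension list is an assumption, adjust it to your formats):

```python
import os

DATA_DIR = "data_set"  # directory name the trainer expects
os.makedirs(DATA_DIR, exist_ok=True)

# extension list is an assumption; add whatever formats your videos use
videos = [f for f in os.listdir(DATA_DIR)
          if f.lower().endswith((".mp4", ".avi", ".mov"))]
print(f"Found {len(videos)} video(s) in {DATA_DIR}/")
```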