The official PyTorch implementation of FastFlowNet (ICRA 2021).
Authors: Lingtong Kong, Chunhua Shen, Jie Yang
Dense optical flow estimation plays a key role in many robotic vision tasks. With the advent of deep learning, it can now be predicted more accurately than with traditional methods. However, current networks often contain a large number of parameters and incur heavy computation costs, which hinders their application on power- or memory-constrained mobile devices. To address these challenges, this paper focuses on designing an efficient structure for fast and accurate optical flow prediction. The proposed FastFlowNet works in the well-known coarse-to-fine manner with the following innovations. First, a new head enhanced pooling pyramid (HEPP) feature extractor is employed to intensify high-resolution pyramid features while reducing parameters. Second, we introduce a novel center dense dilated correlation (CDDC) layer for constructing a compact cost volume that maintains a large search radius with reduced computation burden. Third, an efficient shuffle block decoder (SBD) is implanted into each pyramid level to accelerate flow estimation with only marginal drops in accuracy. The overall architecture of FastFlowNet is shown below.
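As a rough illustration of the CDDC idea, the sketch below builds a hypothetical sampling pattern: matching candidates are dense near the center of the search window and dilated in the outer ring, so the full search radius is preserved while the cost-volume size shrinks. The function name, the inner radius, and the dilation value are illustrative assumptions, not the exact pattern from the paper.

```python
import numpy as np

def cddc_offsets(radius=4, inner=2, dilation=2):
    """Hypothetical CDDC-style offset pattern: dense sampling near the
    center, dilated sampling in the outer ring. The search radius stays
    the same while the number of matching candidates is reduced."""
    offsets = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            r = max(abs(dy), abs(dx))
            if r <= inner:
                offsets.append((dy, dx))              # dense center
            elif dy % dilation == 0 and dx % dilation == 0:
                offsets.append((dy, dx))              # dilated outer ring
    return offsets

dense = (2 * 4 + 1) ** 2        # 81 candidates for a fully dense radius-4 window
compact = len(cddc_offsets())   # fewer candidates, same radius-4 coverage
```

With these illustrative settings the window keeps its radius of 4 but needs only about half the candidates of the dense 9x9 grid, which is the kind of saving the CDDC layer exploits.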
Optimized with TensorRT, FastFlowNet achieves near real-time inference on the Jetson TX2 development board, making it the first real-time solution for accurate optical flow on embedded devices. For training, please refer to PWC-Net and IRR-PWC, since we use the same datasets, augmentation methods and loss functions. A demo video of real-time inference on an embedded device is shown below; note that there is a time delay between the real motion and the visualized optical flow. YouTube Video Presentation.
Experiments on both the synthetic Sintel and real-world KITTI datasets demonstrate the effectiveness of the proposed approach, which consumes only about 1/10 of the computation of comparable networks (PWC-Net and LiteFlowNet) while achieving 90% of their performance. In particular, FastFlowNet contains only 1.37 M parameters and runs at 90 fps on a desktop NVIDIA GTX 1080 Ti or 5.7 fps on an embedded Jetson TX2 GPU on Sintel-resolution images. Comprehensive comparisons among well-known flow architectures are listed in the following table. Times and FLOPs are measured on Sintel-resolution images with PyTorch implementations.
| Model | Sintel Clean Test (AEPE) | KITTI 2015 Test (Fl-all) | Params (M) | FLOPs (G) | Time (ms) 1080Ti | Time (ms) TX2 |
|---|---|---|---|---|---|---|
| FlowNet2 | 4.16 | 11.48% | 162.52 | 24836.4 | 116 | 1547 |
| SPyNet | 6.64 | 35.07% | 1.20 | 149.8 | 50 | 918 |
| PWC-Net | 4.39 | 9.60% | 8.75 | 90.8 | 34 | 485 |
| LiteFlowNet | 4.54 | 9.38% | 5.37 | 163.5 | 55 | 907 |
| FastFlowNet | 4.89 | 11.22% | 1.37 | 12.2 | 11 | 176 |
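A quick arithmetic check of the "about 1/10 computation" claim, using the FLOPs column from the table above:

```python
# FLOPs (G) taken from the comparison table
flops = {"FastFlowNet": 12.2, "PWC-Net": 90.8, "LiteFlowNet": 163.5}

ratio_pwc = flops["FastFlowNet"] / flops["PWC-Net"]      # about 0.134
ratio_lfn = flops["FastFlowNet"] / flops["LiteFlowNet"]  # about 0.075
```

The two ratios bracket 1/10, which is where the headline figure comes from.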
Some visual examples of our FastFlowNet on several image sequences are presented as follows.
Our original experiment environment uses CUDA 9.0, Python 3.6 and PyTorch 0.4.1. First, build and install the Correlation module in `./models/correlation_package/` with the commands below:
$ python setup.py build
$ python setup.py install
`./models/FastFlowNet_v1.py` is an equivalent version of `./models/FastFlowNet.py` that supports CUDA 10.x and PyTorch 1.2.0/1.3.0. To use `./models/FastFlowNet_v1.py`, run `pip install spatial-correlation-sampler==0.2.0`.
`./models/FastFlowNet_v2.py` is an equivalent version of `./models/FastFlowNet.py` that supports CUDA 10.x/11.x and PyTorch 1.6.x/1.7.x/1.8.x/1.9.x/1.10.x/1.11.x/1.12.x. To use `./models/FastFlowNet_v2.py`, run `pip install spatial-correlation-sampler==0.4.0`.
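For reference, the correlation layer provided by these packages computes a cost volume of matching scores between two feature maps. The sketch below is a simplified NumPy version of that operation (it ignores the striding, patch-size, and normalization options of the actual CUDA kernels, and the function name is my own):

```python
import numpy as np

def correlation(f1, f2, max_disp=1):
    """Simplified correlation layer: for each displacement (dy, dx) within
    max_disp, correlate f1 with a shifted f2, averaging over channels.
    f1, f2: (C, H, W) feature maps -> ((2*max_disp+1)**2, H, W) cost volume."""
    C, H, W = f1.shape
    pad = np.pad(f2, ((0, 0), (max_disp, max_disp), (max_disp, max_disp)))
    out = []
    for dy in range(-max_disp, max_disp + 1):
        for dx in range(-max_disp, max_disp + 1):
            shifted = pad[:, max_disp + dy:max_disp + dy + H,
                             max_disp + dx:max_disp + dx + W]
            out.append((f1 * shifted).mean(axis=0))  # per-pixel channel mean
    return np.stack(out)
```

For identical inputs, the zero-displacement channel is just the per-pixel mean of squared features, which is a handy sanity check when debugging the real CUDA module.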
To benchmark running speed and count model parameters, run
$ python benchmark.py
For a demo that predicts optical flow from two temporally adjacent images, run
$ python demo.py
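The demo visualizes the predicted flow field as a color image. A minimal sketch of the standard HSV-style flow coloring (direction mapped to hue, magnitude to saturation) is shown below; the function name and exact mapping are illustrative assumptions, not necessarily what `demo.py` uses:

```python
import colorsys
import numpy as np

def flow_to_rgb(flow):
    """Map a flow field (H, W, 2) to an RGB image (H, W, 3) uint8:
    flow direction -> hue, normalized magnitude -> saturation."""
    u, v = flow[..., 0], flow[..., 1]
    mag = np.sqrt(u ** 2 + v ** 2)
    hue = (np.arctan2(v, u) / (2 * np.pi)) % 1.0   # angle in [0, 1)
    sat = mag / (mag.max() + 1e-8)                 # normalize magnitude
    h, w = mag.shape
    rgb = np.zeros((h, w, 3))
    for i in range(h):
        for j in range(w):
            rgb[i, j] = colorsys.hsv_to_rgb(hue[i, j], sat[i, j], 1.0)
    return (rgb * 255).astype(np.uint8)
```

Zero flow renders as white and larger motions as more saturated colors, which matches the common flow-visualization convention.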
Note that you can switch between pre-trained models from different datasets for specific applications. The model `./checkpoints/fastflownet_ft_mix.pth` is fine-tuned on a mixture of Sintel and KITTI, which may generalize better.
TensorRT is supported with the following configuration:
Card: NVIDIA RTX 3060 Ti
Driver: 470.103.01
CUDA: 11.3
TensorRT: 8.0.1 GA
PyTorch: 1.10.2+cu113
To run inference with TensorRT:
First, clone TensorRT OSS, then copy <Proj_ROOT>/tensorrt_workspace/TensorRT into the TensorRT OSS tree and build:
$ cp -rf ./tensorrt_workspace/TensorRT/* ${TensorRT_OSS_ROOT}/
$ cd ${TensorRT_OSS_ROOT} && mkdir build && cd build
$ cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out -DCUDA_VERSION=11.3
$ make -j
Second, build the correlation module for PyTorch:
$ cd ./tensorrt_workspace/correlation_pytorch/
$ python setup.py build
Then copy the built TensorRT plugin library libnvinfer_plugin.so into `./tensorrt_workspace/tensorrt_plugin_path`, run `python ./tensorrt_workspace/fastflownet.py` to build the engine, and run `python ./tensorrt_workspace/infr.py` to perform inference with TensorRT.
With FP16, FastFlowNet can run at 220 FPS with an input size of 512x512. Example results:
To facilitate the actual deployment of FastFlowNet with TensorRT, a Docker TensorRT environment is available: https://hub.docker.com/r/pullmyleg/tensorrt8_cuda11.3_pytorch1.10.2_fastflownet.
MDFlow: Unsupervised Optical Flow Learning by Reliable Mutual Knowledge Distillation
When using any parts of the Software or the Paper in your work, please cite the following paper:
@InProceedings{Kong_2021_ICRA,
author={Kong, Lingtong and Shen, Chunhua and Yang, Jie},
title={FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation},
booktitle={2021 IEEE International Conference on Robotics and Automation (ICRA)},
year={2021}
}