- (2021.2.13) Support Scaled-YOLOv4 models
- (2021.1.3) Add DIoU-NMS for YOLO (+1% MOTA)
- (2020.11.28) Docker container provided on Ubuntu 18.04
FastMOT is a custom multiple object tracker that implements:
- YOLO detector
- SSD detector
- Deep SORT + OSNet ReID
- KLT optical flow tracking
- Camera motion compensation
Deep learning models are usually the bottleneck in Deep SORT, making it unusable for real-time applications. This repo significantly speeds up the entire system so that it runs in real-time even on Jetson. It also provides enough flexibility to tune the speed-accuracy tradeoff without relying on a lightweight model.
To achieve faster processing, the tracker only runs the detector and feature extractor every N frames, and optical flow is used to fill in the gaps. I swapped the feature extractor in Deep SORT for a better ReID model, OSNet. I also added a feature to re-identify targets that moved out of frame so that the tracker can keep the same IDs. I trained YOLOv4 on CrowdHuman (82% mAP@0.5), while the SSDs are pretrained COCO models from TensorFlow.
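As a rough illustration of the frame-skipping idea (this is a sketch, not the repo's actual control flow; `detect`, `extract`, `update`, and `track` are hypothetical names):

```python
# Hypothetical sketch of detector frame skipping; names and interfaces are
# illustrative and do not correspond to FastMOT's actual classes.
DETECTOR_FRAME_SKIP = 5  # run the detector/feature extractor every N frames

def run(frames, detector, extractor, tracker):
    for frame_id, frame in enumerate(frames):
        if frame_id % DETECTOR_FRAME_SKIP == 0:
            # expensive path: detection + ReID feature extraction
            detections = detector.detect(frame)
            embeddings = extractor.extract(frame, detections)
            tracker.update(frame_id, detections, embeddings)
        else:
            # cheap path: propagate existing tracks with KLT optical flow
            tracker.track(frame)
```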
Both detector and feature extractor use the TensorRT backend and perform asynchronous inference. In addition, most algorithms, including Kalman filter, optical flow, and data association, are optimized and multithreaded using Numba.
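To give a flavor of the kind of Numba optimization described above (this is not code from this repo, just a self-contained example of a JIT-compiled, parallel hot loop like the ones used in data association):

```python
import numpy as np
import numba as nb

@nb.njit(parallel=True, fastmath=True)
def iou_matrix(tlbrs1, tlbrs2):
    """Pairwise IoU between two sets of boxes in (x1, y1, x2, y2) format."""
    ious = np.empty((len(tlbrs1), len(tlbrs2)))
    for i in nb.prange(len(tlbrs1)):
        for j in range(len(tlbrs2)):
            iw = min(tlbrs1[i, 2], tlbrs2[j, 2]) - max(tlbrs1[i, 0], tlbrs2[j, 0]) + 1
            ih = min(tlbrs1[i, 3], tlbrs2[j, 3]) - max(tlbrs1[i, 1], tlbrs2[j, 1]) + 1
            if iw > 0 and ih > 0:
                area1 = (tlbrs1[i, 2] - tlbrs1[i, 0] + 1) * (tlbrs1[i, 3] - tlbrs1[i, 1] + 1)
                area2 = (tlbrs2[j, 2] - tlbrs2[j, 0] + 1) * (tlbrs2[j, 3] - tlbrs2[j, 1] + 1)
                ious[i, j] = iw * ih / (area1 + area2 - iw * ih)
            else:
                ious[i, j] = 0.
    return ious

boxes = np.array([[0., 0., 10., 10.]])
print(iou_matrix(boxes, boxes))  # [[1.]]
```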
Sequence | Density (objects/frame) | MOTA (SSD) | MOTA (YOLOv4) | MOTA (public) | FPS |
---|---|---|---|---|---|
MOT17-13 | 5 - 30 | 19.8% | 45.6% | 41.3% | 38 |
MOT17-04 | 30 - 50 | 43.8% | 61.0% | 75.1% | 22 |
MOT17-03 | 50 - 80 | - | - | - | 15 |
Performance is evaluated with the MOT17 dataset on Jetson Xavier NX using py-motmetrics. When using public detections from MOT17, the MOTA scores are close to state-of-the-art trackers. Tracking speed can reach up to 38 FPS depending on the number of objects. On a desktop CPU/GPU, FPS should be even higher.
This means that even though the tracker runs much faster, it is still highly accurate. A more lightweight detector/feature extractor can potentially be used for additional speedup. Note that plain Deep SORT + YOLO struggles to run in real-time on most edge devices and desktop machines.
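For reference, MOTA is the standard CLEAR MOT accuracy metric computed by py-motmetrics; it penalizes false negatives, false positives, and identity switches relative to the number of ground-truth objects, summed over all frames:

$$\mathrm{MOTA} = 1 - \frac{\sum_t \left(\mathrm{FN}_t + \mathrm{FP}_t + \mathrm{IDSW}_t\right)}{\sum_t \mathrm{GT}_t}$$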
- CUDA >= 10
- cuDNN >= 7
- TensorRT >= 7
- OpenCV >= 3.3
- PyCuda
- Numpy >= 1.15
- Scipy >= 1.5
- TensorFlow < 2.0 (for SSD support)
- Numba == 0.48
- cython-bbox
Make sure to have JetPack 4.4 installed and run the script:
$ scripts/install_jetson.sh
Make sure to have nvidia-docker installed. The image requires an NVIDIA Driver version >= 450. Build and run the docker image:
$ docker build -t fastmot:latest .
$ docker run --rm --gpus all -it -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY fastmot:latest
This includes the pretrained OSNet and SSD models, as well as my custom YOLOv4 ONNX model:
$ scripts/download_models.sh
For an x86 PC, modify `compute` in fastmot/plugins/Makefile to match your GPU compute capability, then build the plugin:
$ cd fastmot/plugins
$ make
The following is only required if you want to use SSD:
$ scripts/download_data.sh
- USB Camera:
$ python3 app.py --input_uri /dev/video0 --mot
- CSI Camera:
$ python3 app.py --input_uri csi://0 --mot
- RTSP IP Camera:
$ python3 app.py --input_uri rtsp://<user>:<password>@<ip>:<port> --mot
- Video file:
$ python3 app.py --input_uri video.mp4 --mot
- Use `--gui` to visualize and `--output_uri` to save output
- To disable the GStreamer backend, set `WITH_GSTREAMER = False` in fastmot/videoio.py
- Note that the first run will be slow due to Numba compilation
- More options can be configured in `cfg/mot.json`
- Set `camera_size` and `camera_fps` to match your camera setting. List all settings for your camera:
  $ v4l2-ctl -d /dev/video0 --list-formats-ext
- To change the detector, modify `detector_type`. This can be either `YOLO` or `SSD`
- To change classes, set `class_ids` under the correct detector. The default class is `1`, which corresponds to person
- To swap the model, modify `model` under a detector. For SSD, you can choose from `SSDInceptionV2`, `SSDMobileNetV1`, or `SSDMobileNetV2`
- Note that with SSD, the detector splits a frame into tiles and processes them in batches for the best accuracy. Change `tiling_grid` to `[2, 2]`, `[2, 1]`, or `[1, 1]` if a smaller batch size is preferred
- If more accuracy is desired and processing power is not an issue, reduce `detector_frame_skip`. Similarly, increase `detector_frame_skip` to speed up tracking at the cost of accuracy. You may also want to change `max_age` such that `max_age * detector_frame_skip` is around `30-40` (an example configuration follows this list)
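For example, a YOLO person-tracking setup might use the following values. This is illustrative only: the key names come from the options above, but the exact nesting and the remaining defaults should be taken from the shipped `cfg/mot.json`.

```json
{
    "camera_size": [1280, 720],
    "camera_fps": 30,
    "detector_type": "YOLO",
    "detector_frame_skip": 5,
    "yolo_detector": {
        "model": "YOLOv4",
        "class_ids": [1],
        "conf_thresh": 0.25
    },
    "max_age": 6
}
```

Here `max_age * detector_frame_skip = 30`, which falls in the suggested 30-40 range.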
This repo supports multi-class tracking and thus can be easily extended to custom classes (e.g. vehicle). You need to train both YOLO and a ReID model on your object classes. Check Darknet for training YOLO and fast-reid for training ReID. After training, convert the models to ONNX format and place them under `fastmot/models`. To convert YOLO to ONNX, tensorrt_demos is a great reference.
- Subclass `YOLO` like here: https://github.com/GeekAlexis/FastMOT/blob/4e946b85381ad807d5456f2ad57d1274d0e72f3d/fastmot/models/yolo.py#L94 (a minimal sketch follows this list)

        ENGINE_PATH: path to TensorRT engine (converted at runtime)
        MODEL_PATH: path to ONNX model
        NUM_CLASSES: total number of classes
        LETTERBOX: keep aspect ratio when resizing
                   For YOLOv4-csp/YOLOv4x-mish, set to True
        NEW_COORDS: new_coords parameter for each yolo layer
                    For YOLOv4-csp/YOLOv4x-mish, set to True
        INPUT_SHAPE: input size in the format "(channel, height, width)"
        LAYER_FACTORS: scale factors with respect to the input size for each yolo layer
                       For YOLOv4/YOLOv4-csp/YOLOv4x-mish, set to [8, 16, 32]
                       For YOLOv3, set to [32, 16, 8]
                       For YOLOv4-tiny/YOLOv3-tiny, set to [32, 16]
        SCALES: scale_x_y parameter for each yolo layer
                For YOLOv4-csp/YOLOv4x-mish, set to [2.0, 2.0, 2.0]
                For YOLOv4, set to [1.2, 1.1, 1.05]
                For YOLOv4-tiny, set to [1.05, 1.05]
                For YOLOv3, set to [1., 1., 1.]
                For YOLOv3-tiny, set to [1., 1.]
        ANCHORS: anchors grouped by each yolo layer

  Note that the anchors may not follow the same order as in the Darknet cfg file. You need to mask out the anchors for each yolo layer using the indices in `mask` in the Darknet cfg. Unlike YOLOv4, the anchors are usually in reverse order for YOLOv3 and the tiny models
- Change the class labels in fastmot/models/label.py to your object classes
- Modify `cfg/mot.json`: under `yolo_detector`, set `model` to the added Python class and set the `class_ids` you want to detect. You may want to play with `conf_thresh` based on the accuracy of your model
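A minimal sketch of the first step, assuming a hypothetical single-class YOLOv4 model added next to the `YOLO` base class in fastmot/models/yolo.py (the attribute names follow the list above; the class name, file paths, input size, and anchor values are illustrative, so check the linked yolo.py for a real example):

```python
# Hypothetical example only: a custom single-class YOLOv4 model.
# All values below are illustrative placeholders.
from pathlib import Path


class YOLOv4Vehicle(YOLO):  # YOLO is the base class linked above
    ENGINE_PATH = Path(__file__).parent / 'yolov4_vehicle.trt'   # TensorRT engine (converted at runtime)
    MODEL_PATH = Path(__file__).parent / 'yolov4_vehicle.onnx'   # your ONNX model under fastmot/models
    NUM_CLASSES = 1                    # total number of classes
    LETTERBOX = False                  # set to True for YOLOv4-csp/YOLOv4x-mish
    NEW_COORDS = False                 # set to True for YOLOv4-csp/YOLOv4x-mish
    INPUT_SHAPE = (3, 512, 512)        # (channel, height, width)
    LAYER_FACTORS = [8, 16, 32]        # scale factors for YOLOv4
    SCALES = [1.2, 1.1, 1.05]          # scale_x_y for YOLOv4
    ANCHORS = [[11, 22, 24, 60, 37, 116],       # anchors grouped per yolo layer,
               [54, 186, 69, 268, 89, 369],     # reordered using the `mask` indices
               [126, 491, 194, 314, 278, 520]]  # from the Darknet cfg
```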
- Subclass `ReID` like here: https://github.com/GeekAlexis/FastMOT/blob/aa707888e39d59540bb70799c7b97c58851662ee/fastmot/models/reid.py#L51 (a minimal sketch follows this list)

        ENGINE_PATH: path to TensorRT engine (converted at runtime)
        MODEL_PATH: path to ONNX model
        INPUT_SHAPE: input size in the format "(channel, height, width)"
        OUTPUT_LAYOUT: feature dimension output by the model (e.g. 512)
        METRIC: distance metric used to match features ('euclidean' or 'cosine')

- Modify `cfg/mot.json`: under `feature_extractor`, set `model` to the added Python class. You may want to play with `max_feat_cost` and `max_reid_cost` - float values from `0` to `2` - based on the accuracy of your model
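Similarly, a minimal sketch of subclassing `ReID` for a custom model, assuming it is added next to the base class in fastmot/models/reid.py (attribute names follow the list above; the class name, paths, input size, and feature dimension are illustrative):

```python
# Hypothetical example only: a custom ReID model.
# All values below are illustrative placeholders.
from pathlib import Path


class OSNetVehicle(ReID):  # ReID is the base class linked above
    ENGINE_PATH = Path(__file__).parent / 'osnet_vehicle.trt'    # TensorRT engine (converted at runtime)
    MODEL_PATH = Path(__file__).parent / 'osnet_vehicle.onnx'    # your ONNX model under fastmot/models
    INPUT_SHAPE = (3, 256, 128)        # (channel, height, width)
    OUTPUT_LAYOUT = 512                # feature dimension output by the model
    METRIC = 'euclidean'               # 'euclidean' or 'cosine'
```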
If you find this repo useful in your project or research, please star and consider citing it:
@software{yukai_yang_2020_4294717,
author = {Yukai Yang},
title = {{FastMOT: High-Performance Multiple Object Tracking
Based on YOLO, Deep SORT, and Optical Flow}},
month = nov,
year = 2020,
publisher = {Zenodo},
version = {v1.0.0},
doi = {10.5281/zenodo.4294717},
url = {https://doi.org/10.5281/zenodo.4294717}
}