
Offline Tracking with Object Permanence

Introduction

This repository contains the code for "Offline Tracking with Object Permanence" (https://arxiv.org/abs/2310.01288) by Xianzhong Liu and Holger Caesar. The project aims to recover occluded vehicle trajectories and reduce the identity switches caused by occlusions.

Overview

A brief overview of the offline tracking model. (a) Online tracking result: each tracklet is shown in a different color (history tracklet: red). (b) Offline Re-ID: the matched pair of tracklets is shown in red; unmatched ones are black. (c) Recovered trajectory.

Our model initially takes the detections from a detector as input. It then uses an off-the-shelf online tracker to associate detections and generate initial tracklets; here we use CenterPoint as the online detector and initial tracker. Next, the Re-ID module tries to associate possible future tracklets with each terminated history tracklet. If a pair of tracklets is matched, the track completion module interpolates the gap between them by predicting the location and orientation of the missing boxes. Both modules extract motion information and lane map information to produce accurate results. The model finally outputs tracks with refined identities and with the missing segments completed.
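The refinement stage can be summarized as a short sketch. Everything below (the Tracklet container, the matches list, and the complete_gap callback) is a hypothetical illustration of the data flow described above, not the repository's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Tracklet:
    track_id: int
    boxes: List[dict] = field(default_factory=list)  # per-frame box dicts

def offline_refine(tracklets: List[Tracklet],
                   matches: List[Tuple[Tracklet, Tracklet]],
                   complete_gap: Callable) -> List[Tracklet]:
    """matches: (history, future) pairs produced by the Re-ID module.
    complete_gap(history, future) -> boxes predicted for the occlusion gap
    by the track completion module. Both interfaces are hypothetical."""
    for history, future in matches:
        future.track_id = history.track_id  # the future tracklet inherits the history identity
        # Fill the occlusion gap, then append the future segment.
        history.boxes += complete_gap(history, future) + future.boxes
    # Drop the future tracklets that were merged into their history tracklets.
    merged = {id(f) for _, f in matches}
    return [t for t in tracklets if id(t) not in merged]
```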

Performance

Re-ID result

We benchmarked our Re-ID result on the nuScenes test split. The results are shown below and on the leaderboard. We used CenterPoint as our base detector.

We only applied our method to vehicle tracks. For non-vehicle tracks, we kept the original CenterPoint tracking results (with NMS). Therefore, results on non-vehicle classes (i.e., bicycle, motorcycle, and pedestrian) should be ignored.
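For reference, a combined submission can be assembled roughly as follows; this is a sketch, not part of the repository, and the two file names are placeholders:

```python
import json

VEHICLE_CLASSES = {"car", "truck", "bus", "trailer"}  # nuScenes vehicle tracking classes

# Hypothetical inputs: refined vehicle tracks + original CenterPoint tracks.
with open("reid_vehicle_results.json") as f:
    refined = json.load(f)["results"]
with open("centerpoint_results.json") as f:
    original = json.load(f)["results"]

# Keep original boxes for non-vehicle classes, refined boxes for vehicles.
merged = {
    token: [b for b in boxes if b["tracking_name"] not in VEHICLE_CLASSES]
           + [b for b in refined.get(token, []) if b["tracking_name"] in VEHICLE_CLASSES]
    for token, boxes in original.items()
}
```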

| Re-ID Result on Test Split | AMOTA (%) $\uparrow$ | AMOTP (m) $\downarrow$ | TP $\uparrow$ | FP $\downarrow$ | FN $\downarrow$ | IDS $\downarrow$ |
|---|---|---|---|---|---|---|
| CenterPoint | 69.7 | 0.596 | $\mathbf{66725}$ | 12788 | $\mathbf{14359}$ | 340 |
| Immortal Tracker | 70.5 | 0.609 | 66511 | 12133 | 14758 | $\mathbf{155}$ |
| Offline Re-ID | $\mathbf{73.4}$ | $\mathbf{0.532}$ | 66644 | $\mathbf{11418}$ | 14576 | 204 |

Table 1: Re-ID evaluation on the nuScenes test split using CenterPoint detections.

At the time of submission, our method ranked 5th among LiDAR-based methods and 2nd among methods using CenterPoint detections (comparing vehicle classes only).

Track completion result

We show quantitative results on the validation split over the vehicle classes. We modified the evaluation protocol so that occluded GT boxes are not filtered. We applied our offline tracking model to multiple SOTA trackers and show the relative improvements it brings by recovering occlusions.

The track completion model can interpolate non-linear trajectories between fragmented tracklets. However, the standard nuScenes evaluation first filters out occluded GT boxes and then linearly interpolates the occluded trajectories. We therefore modified the standard evaluation protocol and evaluated the track completion result locally on the validation split, so that occluded GT boxes are retained for evaluation. Note that our models are trained and tuned on the train split. As before, we focus on vehicle tracks.
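Conceptually, the protocol change amounts to skipping the visibility filter on GT boxes. A minimal sketch, with a hypothetical `visibility` field standing in for the devkit's visibility annotation:

```python
def filter_gt_boxes(gt_boxes, keep_occluded=True):
    """Standard nuScenes evaluation drops GT boxes that are not visible;
    retaining them credits trackers that recover occluded trajectories."""
    if keep_occluded:
        return gt_boxes  # modified protocol: keep every annotated box
    return [b for b in gt_boxes if b.visibility > 0]
```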

| Occlusion Recovery | AMOTA (%) $\uparrow$ w/o | AMOTA (%) $\uparrow$ w | AMOTP (m) $\downarrow$ w/o | AMOTP (m) $\downarrow$ w | IDS $\downarrow$ w/o | IDS $\downarrow$ w | Recall (%) $\uparrow$ w/o | Recall (%) $\uparrow$ w |
|---|---|---|---|---|---|---|---|---|
| CenterPoint | 70.2 | $\mathbf{72.4}$ | 0.634 | $\mathbf{0.615}$ | 254 | $\mathbf{183}$ | 73.7 | $\mathbf{74.5}$ |
| SimpleTrack | 70.0 | $\mathbf{71.0}$ | 0.668 | $\mathbf{0.629}$ | 210 | $\mathbf{170}$ | 72.5 | $\mathbf{72.9}$ |
| VoxelNet | 69.6 | $\mathbf{70.6}$ | 0.710 | $\mathbf{0.665}$ | 308 | $\mathbf{230}$ | 72.8 | $\mathbf{72.9}$ |
| ShaSTA | 72.0 | $\mathbf{72.6}$ | 0.612 | $\mathbf{0.593}$ | 203 | $\mathbf{174}$ | 73.0 | $\mathbf{75.3}$ |

Table 2: Joint evaluation on the nuScenes validation split (occluded GT boxes are not filtered). $\textbf{w}$: with offline Re-ID and track completion. $\textbf{w/o}$: original results without any refinement.

Visualization

Qualitative results of joint evaluation

We visualize the final trajectories recovered from occlusions as blue arrows. The model takes online tracking results as input and performs Re-ID and track completion.

  • Rectangles: GT boxes.

  • Blue arrows: recovered box centers that were missing in the initial tracking result.

  • Red arrows: visible box centers in initial online tracking.

Qualitative results of solely track completion

We also visualize track completion results. The model takes the unoccluded GT tracks as input and infers the occluded trajectories.

Getting Started

We provide instructions on how to install and run our project.

Installation

  1. Install Python and Anaconda (or miniconda)

  2. Clone this repository

  3. Set up a new conda environment

conda create --name offline_trk python=3.7
  4. Install dependencies
conda activate offline_trk

# nuScenes devkit
pip install nuscenes-devkit

# PyTorch: the code has been tested with PyTorch 1.7.1, CUDA 10.1, but should work with newer versions
conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.1 -c pytorch
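After installation, a quick sanity check (not part of the repository) confirms the expected versions:

```python
import torch
print(torch.__version__)          # expect 1.7.1, or your newer version
print(torch.cuda.is_available())  # True if the CUDA build can see a GPU
```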

Dataset Preparation

  1. Download the nuScenes dataset. For this project we need the following:

    • Metadata for the Trainval split (v1.0)
    • Metadata for the Test split (v1.0) (Optional)
    • Map expansion pack (v1.3)
  2. Organize the nuScenes root directory as follows

└── nuScenes/
    ├── maps/
    |   ├── basemaps/
    |   ├── expansion/
    |   ├── prediction/
    |   ├── 36092f0b03a857c6a3403e25b4b7aab3.png
    |   ├── 37819e65e09e5547b8a3ceaefba56bb2.png
    |   ├── 53992ee3023e5494b90c316c183be829.png
    |   └── 93406b464a165eaba6d9de76ca09f5da.png
    ├── v1.0-trainval
    |   ├── attribute.json
    |   ├── calibrated_sensor.json
    |   ...
    |   └── visibility.json         
    └── v1.0-test (Optional)
        ├── attribute.json
        ├── calibrated_sensor.json
        ...
        └── visibility.json  
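A quick way to verify the layout (not part of the repository) is to load the devkit and one map; adjust dataroot to your path:

```python
from nuscenes.nuscenes import NuScenes
from nuscenes.map_expansion.map_api import NuScenesMap

# Both calls fail loudly if the metadata or map expansion is misplaced.
nusc = NuScenes(version='v1.0-trainval', dataroot='/path/to/nuScenes', verbose=True)
nusc_map = NuScenesMap(dataroot='/path/to/nuScenes', map_name='singapore-onenorth')
```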

Inference with online tracking result

Generating the initial online tracking result

  1. Download the detection results in the standard nuScenes submission format. (Note: the link is from CenterPoint; any other detector will also work as long as its output fits the format.) The detection results can be saved in ./det_results/.
  2. Run the tracking script
python nusc_tracking/pub_test.py --work_dir mot_results  --checkpoint det_results/your_detection_result(json file) --version v1.0-trainval --root path/to/nuScenes/root/directory
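The resulting tracking result follows the standard nuScenes tracking submission format. One illustrative entry (all values below are made up):

```python
submission = {
    "meta": {"use_camera": False, "use_lidar": True, "use_radar": False,
             "use_map": False, "use_external": False},
    "results": {
        "<sample_token>": [{
            "sample_token": "<sample_token>",
            "translation": [601.5, 1647.3, 1.8],  # box center in global frame (m)
            "size": [1.9, 4.5, 1.7],              # width, length, height (m)
            "rotation": [0.97, 0.0, 0.0, 0.24],   # quaternion (w, x, y, z)
            "velocity": [3.2, 0.1],               # vx, vy (m/s)
            "tracking_id": "42",
            "tracking_name": "car",
            "tracking_score": 0.85,
        }],
    },
}
```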

Extract vehicle tracklets and convert to input format for Re-ID

  1. Extract vehicle tracklets
python executables/initial_extraction.py --cfg_file data_extraction/nuscenes_dataset_occ.yaml --version v1.0-test  --result_path mot_results/v1.0-test/tracking_result.json --data_root path/to/nuScenes/root/directory --tracker_name <tracker_used>
  2. Convert to Re-ID input. This may take several hours
## Slower
#python executables/nuscenes_dataset_match.py --cfg_file data_extraction/nuscenes_dataset_occ.yaml --data_root path/to/nuScenes/root/directory --tracker_name
## OR a faster way, but requires more computational resources
bash executables/Re-ID_extraction.sh path/to/nuScenes/root/directory tracker_name

Performing Re-ID

  1. Reassociate history tracklets with future tracklets by changing the tracking ID of the future tracklets. The following command will generate the Re-ID result as a .json file, which can be evaluated directly using the standard evaluation code of nuScenes MOT.
python executables/motion_matching.py --cfg_file motion_associator/re-association.yaml --result_path mot_results/v1.0-test/tracking_result.json --data_root path/to/nuScenes/root/directory
  2. To visualize all the association results, run
python executables/motion_matching.py --cfg_file motion_associator/re-association.yaml --result_path mot_results/v1.0-test/tracking_result.json --visualize --data_root path/to/nuScenes/root/directory

The plots will be stored under ./mot_results/Re-ID_results/matching_info/v1.0-test/plots. Note that sometimes the detections are flipped by 180 degrees.

  • Green arrows: History tracklet.

  • Blue arrows: Future tracklets with low association scores.

  • Red arrows: Future tracklets with high association scores.
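Conceptually, the ID rewrite in step 1 behaves like a greedy matching on the predicted association scores. A minimal sketch with hypothetical data structures and threshold (the repository's actual matching logic may differ):

```python
def reassign_ids(history_tracklets, future_tracklets, score, threshold=0.5):
    """score(h, f) -> association score predicted by the Re-ID model
    (hypothetical interface). Each matched future tracklet inherits
    the tracking ID of its history tracklet."""
    taken = set()
    for h in history_tracklets:
        candidates = [(score(h, f), f) for f in future_tracklets
                      if id(f) not in taken]
        if not candidates:
            continue
        best_score, best_f = max(candidates, key=lambda c: c[0])
        if best_score >= threshold:
            best_f.track_id = h.track_id  # re-associate across the occlusion
            taken.add(id(best_f))
```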

Track completion

Complete the fragmented tracks by interpolating between them. To change the split version, edit the config file track_completion_model/track_completion.yaml. First, extract the data from the previous Re-ID results:

python executables/track_completion_ext.py --result_path mot_results/Re-ID_results/path/to/the/Re-ID/results.json --data_root path/to/nuScenes/root/directory

where mot_results/Re-ID_results/path/to/the/Re-ID/results.json is the path to the Re-ID result.

Finally, perform track completion over the Re-ID results. This will produce the final tracking result under mot_results/track_completion_results.

python executables/track_completion.py --result_path mot_results/Re-ID_results/path/to/the/Re-ID/results.json --ckpt_path track_completion_model/trained_completion_model.tar --data_root path/to/nuScenes/root/directory
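The model itself predicts non-linear, map-consistent trajectories. As a mental model of its input/output contract only, here is a naive linear baseline for filling an occlusion gap, with a hypothetical box schema:

```python
import numpy as np

def fill_gap(last_box, first_box, n_missing):
    """last_box / first_box: dicts with 'center' [x, y] and 'yaw' (rad), i.e.
    the last visible box before the occlusion and the first one after it.
    Returns n_missing boxes spaced evenly across the gap."""
    c0, c1 = np.asarray(last_box["center"]), np.asarray(first_box["center"])
    y0 = last_box["yaw"]
    # Shortest signed angular difference, so the yaw never wraps the long way.
    dyaw = (first_box["yaw"] - y0 + np.pi) % (2 * np.pi) - np.pi
    boxes = []
    for i in range(1, n_missing + 1):
        t = i / (n_missing + 1)
        boxes.append({"center": ((1 - t) * c0 + t * c1).tolist(),
                      "yaw": y0 + t * dyaw})
    return boxes
```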

Training

We provide trained Re-ID models under the folder ./motion_associator and a trained track completion model under the folder ./track_completion_model as .tar files. Alternatively, you can train the models yourself by following the steps below.

  1. Run the following command to extract pre-processed data for Re-ID. This may take several hours.
## Preprocess Re-ID data for training
python executables/preprocess.py -c configs/preprocess_match_data.yml -r path/to/nuScenes/root/directory -d path/to/directory/with/preprocessed/Re-ID/data
  2. Run the following command to extract pre-processed data for track completion.
## Preprocess track completion data for training
python executables/preprocess.py -c configs/preprocess_track_completion_data.yml -r path/to/nuScenes/root/directory -d path/to/directory/with/preprocessed/track_completion/data
  3. To train the Re-ID models from scratch, run
### Train map branch
python executables/train.py -c configs/match_train_augment.yml -r path/to/nuScenes/root/directory -d path/to/directory/with/preprocessed/Re-ID/data -o motion_associator/map_branch -n 50
### Train motion branch (Optional)
python executables/train.py -c configs/match_train_augment_only_motion.yml -r path/to/nuScenes/root/directory -d path/to/directory/with/preprocessed/Re-ID/data -o motion_associator/motion_branch -n 50
  4. To train the track completion model from scratch, run
python executables/train.py -c configs/track_completion.yml -r path/to/nuScenes/root/directory -d path/to/directory/with/preprocessed/track_completion/data -o track_completion_model/ -n 50
  5. The training script will save training checkpoints and tensorboard logs in the output directory. To launch tensorboard, run
tensorboard --logdir=path/to/output/directory/tensorboard_logs

Acknowledgement

This project is built upon the following open-source projects. We sincerely appreciate their contributions.

Citation

Please use the following citation when referencing our work:

@article{liu2023offline,
      title={Offline Tracking with Object Permanence}, 
      author={Xianzhong Liu and Holger Caesar},
      journal={arXiv preprint arXiv:2310.01288},
      year={2023}
}

License

MIT License