This is the code for the paper "Learning a Proposal Classifier for Multiple Object Tracking" in the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
Paper: arXiv
NOTE: This is not the final version.
```
@inproceedings{dai2021LPC,
  title={Learning a Proposal Classifier for Multiple Object Tracking},
  author={Dai, Peng and Weng, Renliang and Choi, Wongun and Zhang, Changshui and He, Zhangping and Ding, Wei},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021}
}
```
- Clone and enter this repository:
  ```bash
  git clone https://github.com/daip13/LPC_MOT.git
  ```
- Create a docker image for this project:
  - Python = 3.7.7
  - PyTorch = 1.4.0+cu100
  - Notice: we also provide the docker image on Baidu (code: lq3v) to run our code.
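  If you build the image yourself instead of using the provided one, the environment could be set up roughly as sketched below; the base image tag and the wheel source are assumptions, so adjust them to whatever gives you Python 3.7.7 and a CUDA 10.0 build of PyTorch 1.4.0:
  ```bash
  # Environment sketch (the base image tag and wheel index are assumptions).
  docker run --gpus all -it --name lpc_mot nvidia/cuda:10.0-cudnn7-devel-ubuntu18.04 /bin/bash

  # Inside the container, with Python 3.7.7 available (e.g. via conda):
  pip install torch==1.4.0+cu100 -f https://download.pytorch.org/whl/torch_stable.html

  # Verify the versions match the requirements listed above.
  python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
  ```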
- Copy the LPC_MOT repository to the root path of the docker image.
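  One way to do this, assuming the running container from the step above is named `lpc_mot` (a name used here only for illustration); bind-mounting the repository with `-v` at `docker run` time works just as well:
  ```bash
  docker cp LPC_MOT lpc_mot:/root/LPC_MOT
  ```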
- Download our GCN and ReID networks.
  - The models can also be downloaded from Baidu (code: lq3v).
  - Place the models under /root/LPC_MOT/models/.
  - Notice: we adopt fast-reid as our ReID model. However, its authors have since updated their code. To obtain the same ReID features as our trained model, we also provide the version of the code that we used here.
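  For example (the GCN checkpoint name below matches the one used in the run command at the end of this README; the ReID checkpoint name is a placeholder for whatever file the download contains):
  ```bash
  mkdir -p /root/LPC_MOT/models/
  mv dsgcn_model_iter_100.pth /root/LPC_MOT/models/
  mv reid_model.pth /root/LPC_MOT/models/   # placeholder file name
  ```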
- (OPTIONAL) For convenience, we provide the detection files with extracted ReID features. You can also download them from Baidu (code: lq3v).
  - Place the downloaded data under /root/LPC_MOT/dataset/.
  - If you do not want to download the data, you can also generate it with the script ReID_feature_extraction.py.
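  The run command at the end of this README reads the detections from the path below, so after downloading (or generating) the files this directory should exist and be non-empty:
  ```bash
  ls /root/LPC_MOT/dataset/MOT17/results_reid_with_traindata/detection/
  ```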
- Download the MOT17 dataset and place it under /root/LPC_MOT/dataset/.
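  A quick sanity check that the sequences are in place (this train/ path is the one passed to main.sh below):
  ```bash
  ls /root/LPC_MOT/dataset/MOT17/train/
  # Expected output lists the standard MOT17 sequence folders,
  # e.g. MOT17-02-DPM, MOT17-02-FRCNN, MOT17-02-SDP, ...
  ```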
- Run the tracking pipeline:
  ```bash
  cd /root/LPC_MOT/learnable_proposal_classifier/scripts/
  bash main.sh ../../dataset/MOT17/results_reid_with_traindata/detection/ ../../models/dsgcn_model_iter_100.pth /tmp/LPC_MHT/ ../../dataset/MOT17/results_reid_with_traindata/tracking_output/ ../../dataset/MOT17/train/
  ```
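  The same command with the positional arguments spelled out; the roles are inferred from the paths, so treat the descriptions as assumptions:
  ```bash
  # Positional arguments of main.sh (roles inferred from the paths above):
  #   $1  detection directory (detections with extracted ReID features)
  #   $2  trained GCN proposal-classifier checkpoint
  #   $3  temporary working directory
  #   $4  output directory for the tracking results
  #   $5  MOT17 ground-truth (train) directory
  bash main.sh \
      ../../dataset/MOT17/results_reid_with_traindata/detection/ \
      ../../models/dsgcn_model_iter_100.pth \
      /tmp/LPC_MHT/ \
      ../../dataset/MOT17/results_reid_with_traindata/tracking_output/ \
      ../../dataset/MOT17/train/
  ```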
For training, please refer to LPC_TRAIN for details.