superjeary / SiamTrackers

(2020-2021) The PyTorch versions of SiamFC, SiamRPN, DaSiamRPN, UpdateNet, SiamDW, SiamRPN++, SiamMask, SiamFC++, SiamCAR, SiamBAN, Ocean, and LightTrack; visual object tracking based on deep learning.



SiamTrackers

Description

| Year | Conf | Trackers            | Toolkit      | Source     |
|------|------|---------------------|--------------|------------|
| 2016 | ECCV | SiamFC              | got10k       | unofficial |
|      |      | SiamFC              | got10k       | unofficial |
| 2018 | CVPR | SiamRPN             | got10k       | unofficial |
|      |      | SiamRPN             | got10k       | unofficial |
| 2018 | ECCV | DaSiamRPN           | pysot        | official   |
|      |      | DaSiamRPN           | pysot        | unofficial |
| 2019 | ICCV | UpdateNet(FC)       | pysot        | unofficial |
|      |      | UpdateNet(UP)       | pysot        | unofficial |
|      |      | UpdateNet(DA)       | pysot        | official   |
|      |      | UpdateNet(DW)       | pysot        | unofficial |
| 2019 | CVPR | SiamDW(FC)          | got10k       | unofficial |
|      |      | SiamDW(UP)          | got10k       | unofficial |
| 2019 | CVPR | SiamRPNpp(DW)       | pysot        | official   |
|      |      | SiamRPNpp(DW)       | pysot        | unofficial |
|      |      | SiamRPNpp(UP)       | pysot        | unofficial |
|      |      | SiamRPNpp(DA)       | pysot        | unofficial |
|      |      | SiamRPNpp(ResNet)   | pysot        | official   |
| 2019 | CVPR | SiamMask            | pysot        | official   |
| 2020 | AAAI | SiamFCpp            | pysot&got10k | official   |
|      |      | SiamFCpp            | pysot&got10k | unofficial |
|      |      | SiamFCpp(GoogleNet) | pysot&got10k | official   |

Notes

  • A simple face classification demo based on a siamese network is also implemented.

  • SiamFC: adds the GOT10K toolkit and optimizes the interface (a toolkit usage sketch follows this list). Trained on the VID data set. The test results are slightly lower than the paper (without hyperparameter adjustment).

  • SiamRPN: adds the GOT10K toolkit and optimizes the interface. Trained on the YTB and VID data sets. The test results are slightly lower than the paper (without hyperparameter adjustment).

  • DaSiamRPN: adds the PYSOT toolkit and optimizes the interface. You can debug, train and test easily. The test results are consistent with the paper. Note that a Python 3 environment is required.

  • UpdateNet: adds the PYSOT toolkit and optimizes the interface. The model is sensitive to the learning rate. Our result on VOT2018 is higher than the original paper: EAO = 0.403 (ours) vs. EAO = 0.393 (paper).

  • SiamDW: the paper mainly analyzes the impact of padding on the tracking network.

  • SiamRPNpp: supports VSCode single-step debugging. Adds test scripts for 4 drone datasets. Changes distributed multi-machine multi-GPU training to single-machine multi-GPU training. Training the AlexNet version on four datasets takes 3~4 days with two 1080 GPUs.

  • SiamMask: supports VSCode single-step debugging. Supports testing and training. My test results are inconsistent with the author's; please refer to my SiamMask branch.

  • SiamFCpp: supports VSCode single-step debugging. Adds test scripts for 4 drone datasets. Retraining the AlexNet version on the GOT10K data set takes 15~20 hours with two 1080 GPUs.

  • SiamFCpp(GoogleNet): supports VSCode single-step debugging.
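To make the toolkit notes above concrete, here is a minimal sketch, closely following the got10k toolkit's own documented IdentityTracker example, of how any of the trackers above can be evaluated once it is wrapped in the toolkit's Tracker interface. The data path and the placeholder tracker are illustrative only; a real model (e.g. SiamFC) would run its inference inside init/update.

```python
from got10k.trackers import Tracker
from got10k.experiments import ExperimentOTB


class IdentityTracker(Tracker):
    """Placeholder tracker: always returns the initial box.
    Swap in a real model (e.g. SiamFC) to reproduce the numbers reported below."""

    def __init__(self):
        super(IdentityTracker, self).__init__(name='IdentityTracker')

    def init(self, image, box):
        # box is the ground-truth target location [x, y, w, h] in the first frame
        self.box = box

    def update(self, image):
        # a real tracker would locate the target in `image` here
        return self.box


if __name__ == '__main__':
    tracker = IdentityTracker()
    # 'data/OTB' is an illustrative path; point it at your local OTB100 folder
    experiment = ExperimentOTB('data/OTB', version=2015)
    experiment.run(tracker, visualize=False)
    experiment.report([tracker.name])
```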

Experiment

  • GPU NVIDIA 1080 8G x 2
  • CPU Intel® Xeon(R) CPU E5-2650 v4 @ 2.20GHz × 24
  • CUDA 9.0
  • Ubuntu 16.04
  • PyTorch 1.1.0
  • Python 3.7.3
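A quick, hypothetical sanity check that a local environment matches the versions above:

```python
import sys
import torch

# Print the versions this code base was tested with (see the list above).
print("python :", sys.version.split()[0])      # expect 3.7.x
print("torch  :", torch.__version__)           # expect 1.1.0
print("cuda   :", torch.version.cuda)          # expect 9.0
print("gpus   :", torch.cuda.device_count())   # expect 2 x NVIDIA 1080
```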

Due to the limitations of my hardware configuration, I only chose some of the high-speed algorithms for training and testing on several small tracking datasets.

Trackers SiamFC SiamRPN SiamRPN DaSiamRPN DaSiamRPN SiamRPNpp SiamRPNpp SiamRPNpp SiamRPNpp SiamFCpp SiamFCpp
Train Set GOT official GOT official official official GOT GOT GOT GOT official
Backbone AlexNet AlexNet AlexNet AlexNet AlexNet-DA AlexNet-DW AlexNet-DW AlexNet-UP AlexNet-DA AlexNet AlexNet
FPS 85 >120 >120 >120 >120 >120 >120 >120 >120 >120 >120
OTB100 AUC 0.589 0.637 0.603 0.655 0.646 0.648 0.623 0.619 0.634 0.629 0.680
DP 0.794 0.851 0.820 0.880 0.859 0.853 0.837 0.823 0.846 0.830 0.884
UAV123 AUC 0.504 0.527 0.586 0.604 0.578 0.623
DP 0.702 0.748 0.796 0.801 0.769 0.781
UAV20L AUC 0.410 0.454 0.524 0.530 0.516
DP 0.566 0.617 0.691 0.695 0.613
DTB70 AUC 0.487 0.554 0.588 0.639
DP 0.735 0.766 0.797 0.826
UAVDT AUC 0.451 0.593 0.566 0.632
DP 0.710 0.836 0.793 0.846
VisDrone-Train AUC 0.510 0.547 0.572 0.588
DP 0.698 0.722 0.764 0.784
VOT2016 A 0.538 0.56 0.61 0.625 0.618 0.582 0.592 0.612 0.626
R 0.424 0.26 0.22 0.224 0.238 0.266 0.252 0.266 0.144
E 0.262 0.344 0.411 0.439 0.393 0.372 0.365 0.357 0.460
Lost 91 48 51 57 54 57 31
VOT2018 A 0.501 0.49 0.56 0.586 0.576 0.563 0.555 0.557 0.584 0.577
R 0.534 0.46 0.34 0.276 0.290 0.375 0.384 0.412 0.342 0.183
E 0.223 0.244 0.326 0.383 0.352 0.300 0.292 0.275 0.308 0.385
Lost 114 59 62 80 82 88 73 39
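In the table above, AUC is the area under the OTB success plot and DP is distance precision at the standard 20-pixel threshold. Below is a minimal NumPy sketch of these standard definitions; the per-frame IoU and center-error arrays are placeholders.

```python
import numpy as np


def success_auc(ious, thresholds=np.linspace(0, 1, 21)):
    # Success plot: fraction of frames whose IoU exceeds each overlap threshold;
    # AUC is the mean of that curve over the thresholds.
    return float(np.mean([(ious > t).mean() for t in thresholds]))


def distance_precision(center_errors, threshold=20.0):
    # DP: fraction of frames whose predicted center lies within `threshold` pixels
    # of the ground-truth center.
    return float((center_errors <= threshold).mean())


# Placeholder per-frame values; in practice these come from comparing predicted
# and ground-truth boxes over a whole dataset.
ious = np.random.rand(500)
center_errors = np.random.rand(500) * 50
print(success_auc(ious), distance_precision(center_errors))
```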

Dataset

  • All json files BaiduYun password: xm5w (the json files are provided by pysot; a loading sketch follows this list)

  • OTB2015 BaiduYun password: t5i1

  • VOT2016 BaiduYun password: v7vq

  • VOT2018 BaiduYun password: e5eh

  • VOT2019 BaiduYun password: p4fi

  • VOT2020 BaiduYun password: x93i

  • UAV123 BaiduYun password: 2iq4

  • DTB70 BaiduYun password: e7qm

  • UAVDT BaiduYun password: keva

  • VisDrone2019 BaiduYun password: yxb6

  • TColor128 BaiduYun password: 26d4

  • NFS BaiduYun password: vng1

  • GOT10k BaiduYun password: uxds (SiamFC-GOT, SiamRPN-GOT, SiamDW, SiamFCpp)

  • LaSOT BaiduYun password: ygtx (SiamDW, SiamFCpp)

  • YTB&VID BaiduYun password: 6vkz (SiamRPN)

  • ILSVRC2015 VID BaiduYun password: uqzj (SiamFC, SiamRPNpp, SiamMask, SiamDW, SiamFCpp)

  • ILSVRC2015 DET BaiduYun password: 6fu7 (SiamRPNpp, SiamMask, SiamDW, SiamFCpp)

  • YTB-Crop511 BaiduYun password: ebq1 (SiamRPNpp, SiamMask, SiamDW, SiamFCpp)

  • COCO BaiduYun password: ggya (SiamRPNpp, SiamMask, SiamDW, SiamFCpp)

  • YTB-VOS BaiduYun password: sf1m (SiamMask)

  • DAVIS2017 BaiduYun password: c9qp (SiamMask)

  • TrackingNet BaiduYun password: nkb9 (Note that this link is provided by SiamFCpp author) (SiamFCpp)
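As referenced in the first bullet, here is a small sketch of reading one of the pysot-style json annotation files. The path and the key names ('video_dir', 'init_rect', 'img_names', 'gt_rect') follow the usual pysot annotation format and are assumptions; adjust them if your files differ.

```python
import json
import os

# Hypothetical local path to one of the annotation files listed above.
json_path = "data/OTB100/OTB100.json"
data_root = os.path.dirname(json_path)

with open(json_path, "r") as f:
    meta = json.load(f)   # dict: video name -> per-video annotations

for name, video in list(meta.items())[:3]:
    print(name,
          "| frames:", len(video["img_names"]),
          "| init box (x, y, w, h):", video["init_rect"])
    print("  first image:", os.path.join(data_root, video["img_names"][0]))
```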

Toolkit

Matlab version

Python version

  • pysot-toolkit: OTB, VOT, UAV, NfS, and LaSOT are supported. BaiduYun password: 2t2q (see the evaluation-loop sketch after this list)

  • got10k-toolkit: GOT-10k, OTB, VOT, UAV, TColor, DTB, NfS, LaSOT, and TrackingNet are supported. BaiduYun password: vsar
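For the pysot toolkit, one-pass evaluation (OPE) is typically driven by a dataset loop like the sketch below. The module path toolkit.datasets and the tracker object with init/track methods are assumptions based on the pysot code layout, not a definitive API; the bounding-box conversions used by the real test scripts are omitted for brevity.

```python
from toolkit.datasets import DatasetFactory  # module path assumed from the pysot layout


def run_ope(tracker, dataset_name="OTB100", dataset_root="data/OTB100"):
    """Sketch of one-pass evaluation: init on frame 0, then track every later frame.
    `tracker` is a placeholder object exposing init(img, box) and
    track(img) -> {'bbox': [x, y, w, h]}."""
    dataset = DatasetFactory.create_dataset(name=dataset_name,
                                            dataset_root=dataset_root,
                                            load_img=False)
    results = {}
    for video in dataset:
        pred_bboxes = []
        for idx, (img, gt_bbox) in enumerate(video):
            if idx == 0:
                tracker.init(img, gt_bbox)
                pred_bboxes.append(gt_bbox)
            else:
                pred_bboxes.append(tracker.track(img)["bbox"])
        results[video.name] = pred_bboxes  # video.name assumed from the pysot Video class
    return results
```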

Papers

BaiduYun password: fukj

Reference

[1] SiamFC

Bertinetto L, Valmadre J, Henriques J F, et al. Fully-convolutional siamese networks for object tracking. European Conference on Computer Vision. Springer, Cham, 2016: 850-865.
   
[2] SiamRPN

Li B, Yan J, Wu W, et al. High performance visual tracking with siamese region proposal network. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 8971-8980.

[3] DaSiamRPN

Zhu Z, Wang Q, Li B, et al. Distractor-aware siamese networks for visual object tracking. Proceedings of the European Conference on Computer Vision (ECCV). 2018: 101-117.

[4] UpdateNet

Zhang L, Gonzalez-Garcia A, Weijer J, et al. Learning the Model Update for Siamese Trackers. Proceedings of the IEEE International Conference on Computer Vision. 2019: 4010-4019.
   
[5] SiamDW

Zhang Z, Peng H. Deeper and wider siamese networks for real-time visual tracking. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 4591-4600.

[6] SiamRPNpp

Li B, Wu W, Wang Q, et al. SiamRPN++: Evolution of siamese visual tracking with very deep networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 4282-4291.

[7] SiamMask

Wang Q, Zhang L, Bertinetto L, et al. Fast online object tracking and segmentation: A unifying approach. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 1328-1338.
   
[8] SiamFCpp

Xu Y, Wang Z, Li Z, et al. SiamFC++: Towards robust and accurate visual tracking with target estimation guidelines. Proceedings of the AAAI Conference on Artificial Intelligence. 2020.

[9] SiamCAR

Guo D, Wang J, Cui Y, et al. SiamCAR: Siamese fully convolutional classification and regression for visual tracking. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2020.


