Ziang Cao, Ziyuan Huang, Liang Pan, Shiwei Zhang, Ziwei Liu, and Changhong Fu
In CVPR, 2022.
[paper]
Temporal contexts among consecutive frames are far from being fully utilized in existing visual trackers. In this work, we present TCTrack, a comprehensive framework to fully exploit temporal contexts for aerial tracking. The temporal contexts are incorporated at two levels: the extraction of features and the refinement of similarity maps. Specifically, for feature extraction, an online temporally adaptive convolution is proposed to enhance the spatial features using temporal information, which is achieved by dynamically calibrating the convolution weights according to the previous frames. For similarity map refinement, we propose an adaptive temporal transformer, which first effectively encodes temporal knowledge in a memory-efficient way, before the temporal knowledge is decoded for accurate adjustment of the similarity map. TCTrack is effective and efficient: evaluation on four aerial tracking benchmarks shows its impressive performance; real-world UAV tests show its high speed of over 27 FPS on NVIDIA Jetson AGX Xavier.
The implementation of our online temporally adaptive convolution is based on TAdaConv (ICLR 2022).
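As a rough illustration of the weight-calibration idea described above (not TCTrack's actual implementation), the sketch below rescales each output channel of a base convolution kernel using a descriptor pooled from previous frames. The linear projection `proj` and the tanh gating are hypothetical placeholders:

```python
import numpy as np

def global_descriptor(feat):
    """Average-pool a (C, H, W) feature map to a per-channel (C,) vector."""
    return feat.mean(axis=(1, 2))

def calibrated_weight(base_w, prev_frames, proj):
    """Return a per-frame convolution weight: each output channel of the
    base kernel is rescaled by a factor computed from the pooled
    descriptors of the previous frames. `proj` and the tanh gating are
    illustrative placeholders, not TCTrack's actual layers."""
    ctx = np.mean([global_descriptor(f) for f in prev_frames], axis=0)  # (C_in,)
    scale = 1.0 + np.tanh(proj @ ctx)  # (C_out,), stays near 1 for weak context
    return base_w * scale[:, None, None, None]

# tiny example: 2 input channels, 3 output channels, 3x3 kernels
rng = np.random.default_rng(0)
base_w = rng.standard_normal((3, 2, 3, 3))
proj = rng.standard_normal((3, 2))
prev = [rng.standard_normal((2, 8, 8)) for _ in range(2)]
w_t = calibrated_weight(base_w, prev, proj)  # same shape as base_w
```

Because the calibration only rescales the kernel, the convolution itself stays a standard spatial convolution and no extra frames need to be stored beyond the pooled descriptors.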
This code has been tested on Ubuntu 18.04, Python 3.8.3, PyTorch 1.6.0 (torchvision 0.7.0), and CUDA 10.2. Please install the required libraries before running this code:
```
pip install -r requirements.txt
```
Download the pretrained model from Baidu (code: 2u1l) or Google Drive and put it into the tools/snapshot directory.
Download the testing datasets and put them into the test_dataset directory.
```
python ./tools/test.py \
    --dataset OTB100 \
    --tracker_name TCTrack \
    --snapshot snapshot/general_model.pth  # pretrained model path
```
The testing results will be saved in the results/dataset_name/tracker_name directory.
Note: The results of TCTrack can be downloaded (code: kh3e).
Download the pretrained model from Baidu (code: dj2u) or Google Drive and put it into the tools/snapshot directory.
Download the testing datasets and put them into the test_dataset directory.
```
# offline evaluation
python ./tools/test.py \
    --dataset OTB100 \
    --tracker_name TCTrack++ \
    --snapshot snapshot/general_model.pth  # pretrained model path

# online evaluation
python ./tools/test_rt.py \
    --dataset OTB100 \
    --tracker_name TCTrack++ \
    --snapshot snapshot/general_model.pth  # pretrained model path
```
The testing results will be saved in the results/dataset_name/tracker_name directory.
Note: The results of TCTrack++ can be downloaded (code: 3vyx).
Download the datasets:
Note: train_dataset/dataset_name/readme.md lists detailed instructions on how to generate the training datasets.
To train the TCTrack model, run train_tctrack.py with the desired configs:
```
cd tools
python train_tctrack.py
```
Download the datasets:
Note: train_dataset/dataset_name/readme.md lists detailed instructions on how to generate the training datasets.
To train the TCTrack++ model, run train_tctrackpp.py with the desired configs:
```
cd tools
python train_tctrackpp.py
```
If you want to evaluate the results of our tracker, please put those results into the results directory.
```
python eval.py \
    --tracker_path ./results \        # result path
    --dataset OTB100 \                # dataset name
    --tracker_prefix 'general_model'  # tracker name
```
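For reference, success-style evaluation of this kind is built on the overlap (IoU) between predicted and ground-truth boxes; a minimal sketch, assuming boxes in (x, y, w, h) format (the function names are illustrative, not eval.py's actual API):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))  # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))  # intersection height
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def success_rate(preds, gts, threshold=0.5):
    """Fraction of frames whose predicted box overlaps ground truth
    by more than `threshold`."""
    overlaps = [iou(p, g) for p, g in zip(preds, gts)]
    return sum(o > threshold for o in overlaps) / len(overlaps)
```

Sweeping the threshold from 0 to 1 and plotting the success rate gives the familiar success curve whose area under the curve is typically reported.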
If you want to evaluate the real-time results of our tracker, please put the pkl files into the results_rt_raw directory.
```
# first step
python rt_eva.py \
    --raw_root ./tools/results_rt_raw/OTB100 \  # pkl path
    --tar_root ./tools/results_rt/OTB100 \      # output txt files for evaluation
    --gtroot ./test_dataset/OTB100              # ground truth of the dataset

# second step
python eval.py \
    --tracker_path ./results_rt \  # result path
    --dataset OTB100 \             # dataset name
    --trackers TCTrack++           # tracker name
```
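The core idea behind latency-aware (online) evaluation of this kind is that each frame is scored against the most recent prediction that had actually finished by the frame's timestamp, so a slow tracker is penalized. A minimal sketch of that alignment step (the function name and details are illustrative, not rt_eva.py's actual API):

```python
def latency_aware_align(pred_times, preds, frame_times, init_box):
    """For each frame timestamp, pick the most recent prediction whose
    processing finished at or before that timestamp; before the first
    result is ready, fall back to the initial box."""
    out, j = [], -1
    for t in frame_times:
        # advance to the latest prediction available at time t
        while j + 1 < len(pred_times) and pred_times[j + 1] <= t:
            j += 1
        out.append(init_box if j < 0 else preds[j])
    return out

# toy example: 3 results at 50/100/180 ms, frames arriving at ~30 FPS
aligned = latency_aware_align(
    pred_times=[0.05, 0.10, 0.18],
    preds=["b1", "b2", "b3"],
    frame_times=[0.0, 0.033, 0.066, 0.1, 0.133, 0.2],
    init_box="b0",
)
```

The aligned per-frame boxes can then be scored with the usual offline metrics, which is consistent with the two-step pipeline above (alignment first, then eval.py).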
Note: The evaluation code is based on pysot-toolkit. We would like to express our sincere thanks to the contributors.
```
@inproceedings{cao2022tctrack,
  title={{TCTrack: Temporal Contexts for Aerial Tracking}},
  author={Cao, Ziang and Huang, Ziyuan and Pan, Liang and Zhang, Shiwei and Liu, Ziwei and Fu, Changhong},
  booktitle={CVPR},
  pages={14798--14808},
  year={2022}
}
```
The code is implemented based on pysot. We would like to express our sincere thanks to the contributors.