This repo contains our code for the VisDA2020 challenge at the ECCV workshop.
This work addresses the domain-adaptive pedestrian re-identification problem by eliminating the bias introduced by the inter-domain gap and intra-domain camera differences.
This project is mainly based on reid-strong-baseline.
- Clone the repo:
  `git clone https://github.com/vimar-gu/Bias-Eliminate-DA-ReID.git`
- Install dependencies:
- pytorch >= 1.0.0
- python >= 3.5
- torchvision
- yacs
- Prepare the dataset. We modified the file names so that all datasets can be read through one API; you can download the modified version here. In addition to the original data, we also added CamStyle data to better train the model (see the filename-parsing sketch after this list).
- We use ResNet-ibn and HRNet as backbones. ImageNet-pretrained models can be downloaded here and here (see the weight-loading sketch after this list).
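
To illustrate the "one API" idea from the dataset step above, here is a minimal sketch of reading images from any of the datasets once their file names follow a common convention. It assumes Market-1501-style names such as `0002_c3_000151.jpg` (`<person id>_c<camera id>_...`); the regular expression and the `data/personX` directory layout are hypothetical, so adapt them to the naming used by the modified datasets linked above.

```python
import glob
import os
import re

# Hypothetical unified reader: assumes file names encode the person id and
# camera id as in Market-1501 (e.g. 0002_c3_000151.jpg). Adjust the pattern
# if the modified datasets use a different convention.
FILENAME_PATTERN = re.compile(r'^(-?\d+)_c(\d+)')

def read_image_list(img_dir):
    """Return a list of (image_path, person_id, camera_id) tuples."""
    samples = []
    for path in sorted(glob.glob(os.path.join(img_dir, '*.jpg'))):
        match = FILENAME_PATTERN.match(os.path.basename(path))
        if match is None:
            continue  # skip files that do not follow the naming convention
        pid, camid = int(match.group(1)), int(match.group(2))
        samples.append((path, pid, camid))
    return samples

if __name__ == '__main__':
    # Example usage with a hypothetical directory layout.
    for split in ('train', 'query', 'gallery'):
        samples = read_image_list(os.path.join('data', 'personX', split))
        print(split, len(samples), 'images')
```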
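Below is a generic sketch of initializing a backbone from the ImageNet-pretrained weights linked above. It is not the project's exact loader; it simply drops parameters whose names or shapes do not match the target model (e.g. the ImageNet classifier head) before calling `load_state_dict`.

```python
import torch

def load_imagenet_weights(model, weight_path):
    """Initialize `model` from an ImageNet-pretrained checkpoint, skipping
    parameters that are missing from the model or whose shapes differ."""
    checkpoint = torch.load(weight_path, map_location='cpu')
    # Some checkpoints wrap the parameters in a 'state_dict' key.
    state_dict = checkpoint.get('state_dict', checkpoint)
    model_state = model.state_dict()
    matched = {k: v for k, v in state_dict.items()
               if k in model_state and v.shape == model_state[k].shape}
    model_state.update(matched)
    model.load_state_dict(model_state)
    print('Loaded {}/{} tensors from {}'.format(len(matched), len(model_state), weight_path))
    return model
```

For example, `load_imagenet_weights(resnet50_ibn_a(), 'resnet50_ibn_a.pth')` would initialize a ResNet50-ibn-a backbone (the function and file names here are placeholders).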
If you want to reproduce our results, please refer to [VisDA.md]
Performance on the VisDA2020 validation set (%):
Method | mAP | Rank-1 | Rank-5 | Rank-10 |
---|---|---|---|---|
Baseline | 30.7 | 59.7 | 77.5 | 83.3 |
+ Domain Adaptation | 44.9 | 75.3 | 86.7 | 91.0 |
+ Finetuning | 48.6 | 79.8 | 88.3 | 91.5 |
+ Post Processing | 70.9 | 86.5 | 92.8 | 94.4 |
The models can be downloaded from:
- ResNet50-ibn-a: Google Drive
- ResNet101-ibn-a: Google Drive
- ResNet50-ibn-b: Google Drive
- HRNetv2-w18: Google Drive
- ResNet50-ibn-a-large: Google Drive
- ResNet101-ibn-a-large: Google Drive
- ResNet50-ibn-b-large: Google Drive
- HRNetv2-w18-large: Google Drive
The camera models can be downloaded from:
- Camera(ResNet101): Google Drive
- Camera(ResNet152): Google Drive
- Camera(ResNet101-ibn-a): Google Drive
- Camera(HRNetv2-w18): Google Drive
- In our experience, validation scores can fluctuate considerably and are not perfectly positively correlated with scores on the testing set.
- We have fixed the random seed in the updated code, but results may still differ slightly across environments.
- Using multiple camera models in the testing phase may slightly boost performance (a simple ensembling sketch is given below).
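
As a concrete illustration of the last note, here is a minimal sketch of ensembling several camera classifiers by averaging their softmax outputs at test time. The function name and the way the models are combined are assumptions for illustration; the actual inference pipeline in this repo may differ.

```python
import torch

@torch.no_grad()
def ensemble_camera_predictions(models, images):
    """Average the softmax outputs of several camera-classification models.

    models: iterable of trained camera classifiers (e.g. ResNet101, ResNet152, ...)
    images: a batch tensor of shape (N, 3, H, W)
    returns: averaged class probabilities of shape (N, num_cameras)
    """
    probs = []
    for model in models:
        model.eval()
        probs.append(torch.softmax(model(images), dim=1))
    return torch.stack(probs, dim=0).mean(dim=0)

# Example: predicted camera id per image from the averaged probabilities.
# camera_ids = ensemble_camera_predictions([model_a, model_b], batch).argmax(dim=1)
```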