wangguanan / Bias-Eliminate-DA-ReID

Our solution to Domain Adaptive Pedestrian Re-identification in VisDA2020

Bias Eliminate Domain Adaptive Pedestrian Re-identification

This repo contains our code for the VisDA2020 challenge at the ECCV workshop.

Introduction

This work addresses the domain adaptive pedestrian re-identification problem by eliminating the bias introduced by the inter-domain gap and by intra-domain camera differences.

This project is mainly based on reid-strong-baseline.
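
As a rough illustration of the camera-bias part of this idea, the sketch below shows one common way to down-weight camera-induced similarity at retrieval time using a separately trained camera classifier. The function name, the weight `lam`, and the use of softmax camera probabilities are illustrative assumptions, not necessarily the exact procedure used in this repo.

```python
# Rough illustration only: suppress intra-domain camera bias at retrieval time.
# A separately trained camera classifier predicts, for every image, a probability
# distribution over camera IDs; pairs that look like they come from the same
# camera get their similarity down-weighted.
import numpy as np

def camera_debiased_similarity(reid_sim, q_cam_prob, g_cam_prob, lam=0.1):
    """reid_sim:   (Q, G) appearance similarity between queries and gallery.
    q_cam_prob: (Q, C) softmax camera predictions for the queries.
    g_cam_prob: (G, C) softmax camera predictions for the gallery."""
    cam_sim = q_cam_prob @ g_cam_prob.T  # high when two images look camera-alike
    return reid_sim - lam * cam_sim      # hypothetical weighting, not the repo's exact scheme
```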

Get Started

  1. Clone the repo: git clone https://github.com/vimar-gu/Bias-Eliminate-DA-ReID.git
  2. Install dependencies:
  • pytorch >= 1.0.0
  • python >= 3.5
  • torchvision
  • yacs
  3. Prepare the dataset. We modified the file names so that all datasets can be read through one API (see the sketch after this list). You can download the modified version here. In addition to the original data, we also added CamStyle data to better train the model.
  4. We use ResNet-ibn and HRNet as backbones. ImageNet pretrained models can be downloaded here and here.
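
As referenced in step 3, here is a minimal sketch of the kind of single reading API a unified naming convention enables. It assumes filenames of the form `<person_id>_c<camera_id>_<frame>.jpg`, a common ReID convention; the exact pattern used in the modified data may differ.

```python
# Minimal sketch of a unified dataset reader. The filename pattern and the
# example path below are assumptions for illustration.
import glob
import os
import re

def read_dataset(img_dir, pattern=r'(\d+)_c(\d+)'):
    """Return a list of (image_path, person_id, camera_id) tuples."""
    regex = re.compile(pattern)
    samples = []
    for path in sorted(glob.glob(os.path.join(img_dir, '*.jpg'))):
        match = regex.search(os.path.basename(path))
        if match is None:
            continue  # skip files that do not follow the naming convention
        person_id, camera_id = map(int, match.groups())
        samples.append((path, person_id, camera_id))
    return samples

# With one convention, every split can be read the same way, e.g.:
# train = read_dataset('data/personx/train')  # hypothetical path
```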

Run

If you want to reproduce our results, please refer to VisDA.md.

Results

Performance on the VisDA2020 validation dataset:

| Method | mAP | Rank-1 | Rank-5 | Rank-10 |
| --- | --- | --- | --- | --- |
| Baseline | 30.7 | 59.7 | 77.5 | 83.3 |
| + Domain Adaptation | 44.9 | 75.3 | 86.7 | 91.0 |
| + Finetuning | 48.6 | 79.8 | 88.3 | 91.5 |
| + Post Processing | 70.9 | 86.5 | 92.8 | 94.4 |
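
For reference, the sketch below shows how mAP and Rank-k (CMC) scores like those above are typically computed from a query-gallery distance matrix. It is a simplified illustration (no junk/same-camera filtering) and `evaluate` is a hypothetical helper, not the challenge's official scoring code.

```python
# Simplified mAP / CMC computation from a query-gallery distance matrix.
import numpy as np

def evaluate(dist, q_ids, g_ids, topk=(1, 5, 10)):
    """dist: (Q, G) distances, smaller = more similar; q_ids/g_ids: identity arrays."""
    order = np.argsort(dist, axis=1)              # gallery ranked per query
    matches = g_ids[order] == q_ids[:, None]      # True where the identity matches

    aps = []
    cmc = np.zeros(max(topk))
    for row in matches:
        if not row.any():
            continue                              # query identity absent from gallery
        cmc[int(np.argmax(row)):] += 1            # first correct hit and beyond
        hits = np.cumsum(row)
        ranks = np.flatnonzero(row) + 1           # 1-indexed ranks of correct hits
        aps.append((hits[row] / ranks).mean())    # average precision for this query

    cmc /= len(aps)
    return float(np.mean(aps)), {k: float(cmc[k - 1]) for k in topk}

# mAP, rank = evaluate(dist, q_ids, g_ids)   # rank[1] -> Rank-1, rank[5] -> Rank-5
```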

Trained models

The models can be downloaded from:

The camera models can be downloaded from:

Some tips

  • In our experience, validation scores can fluctuate considerably and are not perfectly positively correlated with scores on the test set.
  • We have fixed the random seed in the latest updates, but some differences may remain depending on the environment (see the sketch after this list).
  • Using multiple camera models in the testing phase may slightly boost performance.
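
A minimal sketch of the kind of seed fixing referred to above; the actual seed value and call sites in this repo may differ.

```python
# Fix random seeds across Python, NumPy and PyTorch. Even with this, results
# can vary slightly across CUDA/cuDNN versions and non-deterministic GPU ops.
import random
import numpy as np
import torch

def set_seed(seed=1234):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade some speed for reproducibility in cuDNN-backed ops
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```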

License

MIT License

