:statue_of_liberty: Person Re-identification in the 3D Space :statue_of_liberty:

Paper: https://arxiv.org/abs/2006.04569

Person Re-id in the 3D Space

Python 3.6 | License: MIT

Thanks for your attention. In this repo, we provide the code for the paper Person Re-identification in the 3D Space (https://arxiv.org/abs/2006.04569).

News

  • You may directly download my generated 3D data of the Market-1501 dataset from [OneDrive] or [GoogleDrive], so you can skip the data preparation part.

Prerequisites

  • Python 3.6 or 3.7
  • GPU memory >= 4GB (e.g., GTX 1080)
  • PyTorch 1.4.0 (not the latest release; newer versions changed the C++ extension interfaces and are incompatible)
  • dgl

Install

Here I use CUDA 10.1 by default.

conda create --name OG python=3.7
conda activate OG
conda install pytorch=1.4.0 torchvision=0.5.0 cudatoolkit=10.1 -c pytorch
pip install dgl-cu101
pip install -r requirements.txt
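
After installation, a quick sanity check (my own minimal sketch, not part of the original repo) can confirm that the pinned versions are in place and the GPU is visible:

# optional environment check; verifies the versions required above
import torch
import dgl

print("torch:", torch.__version__)            # expected: 1.4.0
print("dgl:", dgl.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))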

Prepare Data

  • As noted above, you may directly download my generated 3D data of the Market-1501 dataset from [OneDrive] or [GoogleDrive] and skip the rest of this section.

Download Market-1501, DukeMTMC-reID, or MSMT17 and unzip them in the parent directory ../

Split the datasets and arrange the images into one folder per identity:

python prepare_market.py
python prepare_duke.py
python prepare_MSMT.py

Link the 2D datasets:

ln -s ../Market/pytorch  ./2DMarket
ln -s ../Duke/pytorch  ./2DDuke
ln -s ../MSMT/pytorch  ./2DMSMT
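
If you want to confirm that the preparation and the symlinks worked, a minimal sketch like the following can help. It assumes the prepare scripts produce train_all / query / gallery subfolders with one directory per identity (an assumption on my part; check the prepare scripts for the exact layout):

# sanity check for the prepared 2D data; subfolder names are assumed
import os

for root in ["./2DMarket", "./2DDuke", "./2DMSMT"]:
    if not os.path.isdir(root):
        print(root, "is missing (did you create the symlink?)")
        continue
    for split in ["train_all", "query", "gallery"]:   # assumed split names
        path = os.path.join(root, split)
        if os.path.isdir(path):
            print(path, "->", len(os.listdir(path)), "ID folders")
        else:
            print(path, "not found")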

Generate the 3D data with the code at https://github.com/layumi/hmr (a modified version of https://github.com/akanazawa/hmr with 2D-to-3D color mapping added).
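
I have not documented the on-disk format of the generated 3D data here; as a purely hypothetical illustration, if each sample were saved as a NumPy array of shape (N, 6) holding x, y, z, r, g, b per point, you could inspect it like this (the file name and layout are assumptions, not the confirmed format):

# hypothetical inspection of one generated 3D sample
# assumes an (N, 6) array of x, y, z, r, g, b -- check the hmr fork for the real format
import numpy as np

points = np.load("0002_c1s1_000451_03.npy")   # hypothetical file name
print("shape:", points.shape)
xyz, rgb = points[:, :3], points[:, 3:]
print("xyz range:", xyz.min(axis=0), xyz.max(axis=0))
print("rgb range:", rgb.min(axis=0), rgb.max(axis=0))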

Training

  • Market-1501

OG-Net

python train_M.py --batch-size 8 --name ALL_Dense_b8_lr3.5_flip_slim0.5_warm5_scale_e0_d7+bg_adam_init768_clusterXYZRGB --slim 0.5 --flip --scale  --lrRate 3.5e-4 --gpu_ids 0 --warm_epoch 5  --erase 0  --droprate 0.7   --use_dense  --bg   --adam  --init 768  --cluster xyzrgb  --train_all

OG-Net-Small

python train_M.py --batch-size 8 --name ALL_SDense_b8_lr3.5_flip_slim0.5_warm5_scale_e0_d7+bg_adam_init768_clusterXYZRGB --slim 0.5 --flip --scale  --lrRate 3.5e-4 --gpu_ids 0 --warm_epoch 5  --erase 0  --droprate 0.7   --use_dense  --bg   --adam  --init 768  --cluster xyzrgb  --train_all     --feature_dims 48,96,192,384
  • DukeMTMC-reID

OG-Net

python train_M.py --batch-size 8 --name ALL_Duke_Dense_b8_lr3.5_flip_slim0.5_warm5_scale_e0_d7+bg_adam_init768_clusterXYZRGB --slim 0.5 --flip --scale  --lrRate 3.5e-4 --gpu_ids 0 --warm_epoch 5  --erase 0  --droprate 0.7   --use_dense  --bg   --adam  --init 768  --cluster xyzrgb  --dataset-path 2DDuke  --train_all

OG-Net-Small

python train_M.py --batch-size 8 --name ALL_Duke_SDense_b8_lr3.5_flip_slim0.5_warm5_scale_e0_d7+bg_adam_init768_clusterXYZRGB --slim 0.5 --flip --scale  --lrRate 3.5e-4 --gpu_ids 0 --warm_epoch 5  --erase 0  --droprate 0.7   --use_dense  --bg   --adam  --init 768  --cluster xyzrgb  --train_all    --feature_dims 48,96,192,384 --dataset-path 2DDuke
  • MSMT-17

OG-Net

python train_M.py --batch-size 8 --name MSMT_Dense_b8_lr3.5_flip_slim0.5_warm5_scale_e0_d7+bg_adam_init768_clusterXYZRGB --slim 0.5 --flip --scale  --lrRate 3.5e-4 --gpu_ids 0 --warm_epoch 5  --erase 0  --droprate 0.7   --use_dense  --bg   --adam  --init 768  --cluster xyzrgb  --dataset-path 2DMSMT

OG-Net-Small

python train_M.py --batch-size 8 --name ALL_MSMT_SDense_b8_lr3.5_flip_slim0.5_warm5_scale_e0_d5+bg_adam_init768_clusterXYZRGB --slim 0.5 --flip --scale  --lrRate 3.5e-4 --gpu_ids 0 --warm_epoch 5  --erase 0  --droprate 0.5   --use_dense  --bg   --adam  --init 768  --cluster xyzrgb  --dataset-path 2DMSMT  --train_all  --feature_dims 48,96,192,384
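
If you want to launch several of these runs in sequence, a small wrapper like the following (my own sketch, not part of the repo) reuses the OG-Net-Small commands above verbatim; Market uses the default dataset path, while Duke and MSMT pass the --dataset-path symlinks created earlier:

# sequential launcher for the OG-Net-Small runs listed above
import subprocess

common = ("--batch-size 8 --slim 0.5 --flip --scale --lrRate 3.5e-4 --gpu_ids 0 "
          "--warm_epoch 5 --erase 0 --use_dense --bg --adam --init 768 "
          "--cluster xyzrgb --train_all --feature_dims 48,96,192,384")

runs = [
    # (run name, dataset-specific flags); copied from the commands above
    ("ALL_SDense_b8_lr3.5_flip_slim0.5_warm5_scale_e0_d7+bg_adam_init768_clusterXYZRGB", "--droprate 0.7"),
    ("ALL_Duke_SDense_b8_lr3.5_flip_slim0.5_warm5_scale_e0_d7+bg_adam_init768_clusterXYZRGB", "--droprate 0.7 --dataset-path 2DDuke"),
    ("ALL_MSMT_SDense_b8_lr3.5_flip_slim0.5_warm5_scale_e0_d5+bg_adam_init768_clusterXYZRGB", "--droprate 0.5 --dataset-path 2DMSMT"),
]

for name, extra in runs:
    cmd = f"python train_M.py --name {name} {common} {extra}"
    print(cmd)
    subprocess.run(cmd.split(), check=True)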

Evaluation

  • Market-1501
python test_M.py  --name  ALL_Dense_b8_lr3.5_flip_slim0.5_warm5_scale_e0_d7+bg_adam_init768_clusterXYZRGB
  • DukeMTMC-reID
python test_M.py  --data 2DDuke --name  ALL_Duke_Dense_b8_lr3.5_flip_slim0.5_warm5_scale_e0_d7+bg_adam_init768_clusterXYZRGB
  • MSMT-17
python test_MSMT.py  --name MSMT_Dense_b8_lr3.5_flip_slim0.5_warm5_scale_e0_d7+bg_adam_init768_clusterXYZRGB

Pre-trained Models

Since OG-Net is really small, I have included the trained models in this GitHub repo under ./snapshot.

Results

Person Re-ID Performance

Model name     Market          Duke            MSMT
OG-Net-Small   80.85 (59.56)   70.11 (49.93)   34.87 (14.57)
OG-Net         80.94 (59.97)   71.77 (50.81)   36.37 (15.74)

ModelNet Performance

I added the OG-Net code to https://github.com/layumi/dgcnn
Results on ModelNet are 92.02% Top-1 accuracy / 88.84% mean-class Top-1 accuracy.

Citation

If this work helps your research, please cite it in your paper. Thanks a lot.

@article{zheng2020person,
  title={Person Re-identification in the 3D Space},
  author={Zheng, Zhedong and Yang, Yi},
  journal={arXiv preprint arXiv:2006.04569},
  year={2020}
}

Related Work

We thank the great works of hmr, DGL, DGCNN, and PointNet++. You may check their code in the corresponding repositories (the hmr and DGCNN links appear above).

The baseline models used in the paper are modified from the public implementations of these works.

Acknowledgement

I would like to thank Yaxiong Wang, Yuhang Ding, Qian Liu, Chuchu Han, Tianqi Tang, Zonghan Wu, and Qipeng Guo for their helpful comments and suggestions.
