d-li14 / axial-deeplab

This is a PyTorch re-implementation of Axial-DeepLab (ECCV 2020 Spotlight).

Paper: https://arxiv.org/abs/2003.07853

Axial-DeepLab (ECCV 2020, Spotlight)

This is an ongoing PyTorch re-implementation of the Axial-DeepLab paper. The re-implementation is mainly the work of an amazing junior student, Huaijin Pi.

@inproceedings{wang2020axial,
  title={Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation},
  author={Wang, Huiyu and Zhu, Yukun and Green, Bradley and Adam, Hartwig and Yuille, Alan and Chen, Liang-Chieh},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2020}
}

Currently, only ImageNet classification with the "Conv-Stem + Axial-Attention" backbone is supported. If you are interested in contributing to this repo, please open an issue so we can discuss further.
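
For orientation, below is a minimal sketch of attention computed along a single spatial axis (the height axis), which is the core idea behind the backbone. This is an illustration only, not the repo's exact module: the paper's implementation additionally uses position-sensitive terms (relative position embeddings on queries, keys, and values) and batch normalization, and all names here are hypothetical.

import torch
import torch.nn as nn

class AxialAttention1D(nn.Module):
    # Illustrative only: multi-head self-attention along one spatial axis,
    # without the position-sensitive terms described in the paper.
    def __init__(self, channels, heads=8):
        super().__init__()
        assert channels % heads == 0
        self.heads = heads
        self.scale = (channels // heads) ** -0.5
        self.qkv = nn.Conv1d(channels, channels * 3, kernel_size=1, bias=False)

    def forward(self, x):
        # x: (N, C, H, W); attend along H independently for every column w.
        n, c, h, w = x.shape
        x = x.permute(0, 3, 1, 2).reshape(n * w, c, h)        # (N*W, C, H)
        q, k, v = self.qkv(x).chunk(3, dim=1)                 # each (N*W, C, H)
        q = q.reshape(n * w, self.heads, c // self.heads, h) * self.scale
        k = k.reshape(n * w, self.heads, c // self.heads, h)
        v = v.reshape(n * w, self.heads, c // self.heads, h)
        attn = torch.einsum('bhdi,bhdj->bhij', q, k).softmax(dim=-1)
        out = torch.einsum('bhij,bhdj->bhdi', attn, v)        # (N*W, heads, d, H)
        out = out.reshape(n, w, c, h).permute(0, 2, 3, 1)     # back to (N, C, H, W)
        return out

Stacking one such layer along the height axis and one along the width axis forms the "axial" block that stands in for full 2D self-attention.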

Preparation

pip install tensorboardX
mkdir data
cd data
ln -s path/to/dataset imagenet
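
The symlinked imagenet directory is expected to contain the usual train/val split. Assuming the data loader uses torchvision's ImageFolder layout (not verified against this repo), it would look roughly like:

imagenet/
  train/
    n01440764/
      *.JPEG
    ...
  val/
    n01440764/
      *.JPEG
    ...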

Training

  • Non-distributed training
python train.py --model axial50s --gpu_id 0,1,2,3 --batch_size 128 --val_batch_size 128 --name axial50s --lr 0.05 --nesterov
  • Distributed training
CUDA_VISIBLE_DEVICES=0,1,2,3 python dist_train.py --model axial50s --batch_size 128 --val_batch_size 128 --name axial50s --lr 0.05 --nesterov --dist-url 'tcp://127.0.0.1:4128' --dist-backend 'nccl' --multiprocessing-distributed --world-size 1 --rank 0
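
The distributed flags appear to follow the convention of the official PyTorch ImageNet example: --world-size is the number of nodes, --rank is the index of the current node, --dist-url is the rendezvous address, and --multiprocessing-distributed spawns one process per visible GPU. For a single machine, the values shown above should work as-is.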

You can change the --model argument to train other variants.
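
For example, assuming a larger variant is registered in the repo under a hypothetical name such as axial50m:

python train.py --model axial50m --gpu_id 0,1,2,3 --batch_size 128 --val_batch_size 128 --name axial50m --lr 0.05 --nesterov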

Testing

python train.py --model axial50s --gpu_id 0,1,2,3 --batch_size 128 --val_batch_size 128 --name axial50s --lr 0.05 --nesterov --test

You can test with distributed settings in the same way.
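
Presumably this means appending --test to the distributed command, assuming dist_train.py accepts the same flag as train.py (not verified):

CUDA_VISIBLE_DEVICES=0,1,2,3 python dist_train.py --model axial50s --batch_size 128 --val_batch_size 128 --name axial50s --lr 0.05 --nesterov --dist-url 'tcp://127.0.0.1:4128' --dist-backend 'nccl' --multiprocessing-distributed --world-size 1 --rank 0 --test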

Credits
