
FreeAnchor

The code for "FreeAnchor: Learning to Match Anchors for Visual Object Detection" (NeurIPS 2019). Paper: https://arxiv.org/abs/1909.02466

This repo is based on maskrcnn-benchmark. FreeAnchor has also been implemented in mmdetection (thanks @yhcao6).

(Figure: FreeAnchor architecture)

Detection performance on COCO:

| Hardware | Backbone | Iteration | Scale jittering (train / test) | AP (minival) | AP (test-dev) | Model link |
| --- | --- | --- | --- | --- | --- | --- |
| 4 x V100 | ResNet-50-FPN | 90k | N / N | 38.6 | 39.1 | Google Drive / Baidu Drive |
| 4 x V100 | ResNet-101-FPN | 90k | N / N | 41.0 | 41.3 | Google Drive / Baidu Drive |
| 4 x V100 | ResNet-101-FPN | 135k | N / N | 41.3 | 41.8 | Google Drive / Baidu Drive |
| 4 x V100 | ResNeXt-101-32x8d-FPN | 135k | Y / N | 44.2 | 44.8 | Google Drive / Baidu Drive |
| 8 x 2080Ti | ResNet-50-FPN | 90k | N / N | 38.4 | 38.9 | Google Drive / Baidu Drive |
| 8 x 2080Ti | ResNet-101-FPN | 90k | N / N | 40.4 | 41.1 | Google Drive / Baidu Drive |
| 8 x 2080Ti | ResNet-101-FPN | 135k | N / N | 41.1 | 41.5 | Google Drive / Baidu Drive |
| 8 x 2080Ti | ResNeXt-101-32x8d-FPN | 135k | Y / N | 44.2 | 44.9 | Google Drive / Baidu Drive |
| 8 x 2080Ti | ResNet-101-FPN | 180k | Y / N | 42.7 | 43.1 | Google Drive / Baidu Drive |

Installation

Check INSTALL.md for installation instructions.

Usage

You will need to download the COCO dataset and configure the paths to it. To do so, modify maskrcnn_benchmark/config/paths_catalog.py so that it points to the location where your datasets are stored, as sketched below.
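For reference, here is a minimal sketch of what the dataset catalog in a maskrcnn-benchmark-based repo looks like; the directory values below are placeholders, and the dataset names must match the DATASETS entries referenced by the config files.

```python
# maskrcnn_benchmark/config/paths_catalog.py -- illustrative excerpt, not verbatim.
class DatasetCatalog(object):
    # Root directory that the relative paths below are resolved against;
    # point it at wherever your COCO data lives.
    DATA_DIR = "datasets"

    DATASETS = {
        # Keys must match the dataset names used in the YAML configs.
        "coco_2017_train": {
            "img_dir": "coco/train2017",
            "ann_file": "coco/annotations/instances_train2017.json",
        },
        "coco_2017_val": {
            "img_dir": "coco/val2017",
            "ann_file": "coco/annotations/instances_val2017.json",
        },
    }
```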

Config Files

We provide four configuration files in the configs directory.

| Backbone | Iteration | Scale jittering (train / test) | Config file |
| --- | --- | --- | --- |
| ResNet-50-FPN | 90k | N / N | `configs/free_anchor_R-50-FPN_1x.yaml` |
| ResNet-101-FPN | 90k | N / N | `configs/free_anchor_R-101-FPN_1x.yaml` |
| ResNet-101-FPN | 135k | N / N | `configs/free_anchor_R-101-FPN_1.5x.yaml` |
| ResNeXt-101-32x8d-FPN | 135k | Y / N | `configs/free_anchor_X-101-FPN_j1.5x.yaml` |
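These configs follow maskrcnn-benchmark's YAML schema. As a rough illustration of the kind of fields you may want to inspect or override (example values, not copies of the shipped configs), a 90k "1x" schedule looks roughly like:

```yaml
# Illustrative excerpt only -- consult the shipped configs for actual values.
MODEL:
  WEIGHT: "catalog://ImageNetPretrained/MSRA/R-50"  # ImageNet-pretrained backbone
DATASETS:
  TRAIN: ("coco_2017_train",)  # names resolved through paths_catalog.py
  TEST: ("coco_2017_val",)
SOLVER:
  IMS_PER_BATCH: 16      # total batch size summed over all GPUs
  BASE_LR: 0.01
  STEPS: (60000, 80000)  # LR decay points of the 90k schedule
  MAX_ITER: 90000
```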

Training with 4 GPUs (4 images per GPU)

```bash
cd path_to_free_anchor
export NGPUS=4
python -m torch.distributed.launch --nproc_per_node=$NGPUS tools/train_net.py --config-file "path/to/config/file.yaml"
```

Training with 8 GPUs (2 images per GPU)

```bash
cd path_to_free_anchor
export NGPUS=8
python -m torch.distributed.launch --nproc_per_node=$NGPUS tools/train_net.py --config-file "path/to/config/file.yaml"
```
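Both layouts train with the same total batch of 16 images: in maskrcnn-benchmark, SOLVER.IMS_PER_BATCH counts images across all GPUs, so the per-GPU load is simply IMS_PER_BATCH / NGPUS. Trailing KEY VALUE pairs on the command line are merged into the config, so options can be overridden without editing the YAML; for example (assuming a total batch of 16, which matches the 4x4 and 8x2 layouts above):

```bash
# Override a config value from the command line instead of editing the YAML.
python -m torch.distributed.launch --nproc_per_node=$NGPUS tools/train_net.py \
    --config-file "configs/free_anchor_R-50-FPN_1x.yaml" \
    SOLVER.IMS_PER_BATCH 16
```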

Test on MS-COCO test-dev

```bash
cd path_to_free_anchor
python -m torch.distributed.launch --nproc_per_node=$NGPUS tools/test_net.py \
    --config-file "path/to/config/file.yaml" \
    MODEL.WEIGHT "path/to/.pth file" \
    DATASETS.TEST "('coco_test-dev',)"
```
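Since test-dev annotations are not public, no AP is printed locally; the run writes the detections to a JSON file under the output directory, which you then upload to the COCO evaluation server. A concrete invocation might look like the following (the checkpoint and output paths are hypothetical):

```bash
# Hypothetical checkpoint / output paths; the detection JSON for submission
# is written under OUTPUT_DIR.
python -m torch.distributed.launch --nproc_per_node=$NGPUS tools/test_net.py \
    --config-file "configs/free_anchor_X-101-FPN_j1.5x.yaml" \
    MODEL.WEIGHT "models/free_anchor_X-101-FPN_j1.5x.pth" \
    DATASETS.TEST "('coco_test-dev',)" \
    OUTPUT_DIR "outputs/free_anchor_X-101-FPN_j1.5x"
```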

Evaluate NMS Recall

```bash
cd path_to_free_anchor
python -m torch.distributed.launch --nproc_per_node=$NGPUS tools/eval_NR.py \
    --config-file "path/to/config/file.yaml" \
    MODEL.WEIGHT "path/to/.pth file"
```

Citations

Please consider citing our paper in your publications if the project helps your research.

```
@inproceedings{zhang2019freeanchor,
  title     = {{FreeAnchor}: Learning to Match Anchors for Visual Object Detection},
  author    = {Zhang, Xiaosong and Wan, Fang and Liu, Chang and Ji, Rongrong and Ye, Qixiang},
  booktitle = {Neural Information Processing Systems},
  year      = {2019}
}
```


License: MIT License

