Deep Occlusion-Aware Instance Segmentation with Overlapping BiLayers [BCNet, CVPR 2021]
This is the official PyTorch implementation of BCNet, built on the open-source detectron2.
Deep Occlusion-Aware Instance Segmentation with Overlapping BiLayers
Lei Ke, Yu-Wing Tai, Chi-Keung Tang
CVPR 2021
Highlights
- BCNet: Two-/one-stage (detect-then-segment) instance segmentation with state-of-the-art performance.
- Novelty: A new mask head design with explicit occlusion modeling: a bilayer structure decouples object boundary and mask for the occluder and the occludee in the same RoI.
- Efficacy: Large improvements on both the FCOS (anchor-free) and Faster R-CNN (anchor-based) detectors.
- Simple: Small additional computational overhead and easy to use.
Visualization of Occluded Objects
Qualitative instance segmentation results of BCNet, using a ResNet-101-FPN backbone and the FCOS detector.
Results on COCO test-dev
(See Table 8 of the paper for the full results; all methods are trained on COCO train2017.)
Detector(Two-stage) | Backbone | Method | mAP(mask) |
---|---|---|---|
Faster R-CNN | Res-R50-FPN | Mask R-CNN (ICCV'17) | 34.2 |
Faster R-CNN | Res-R50-FPN | PANet (CVPR'18) | 36.6 |
Faster R-CNN | Res-R50-FPN | MS R-CNN (CVPR'19) | 35.6 |
Faster R-CNN | Res-R50-FPN | PointRend (1x CVPR'20) | 36.3 |
Faster R-CNN | Res-R50-FPN | BCNet (CVPR'21) | 38.4 |
Faster R-CNN | Res-R101-FPN | Mask R-CNN (ICCV'17) | 36.1 |
Faster R-CNN | Res-R101-FPN | MS R-CNN (CVPR'19) | 38.3 |
Faster R-CNN | Res-R101-FPN | BMask R-CNN (ECCV'20) | 37.7 |
Box-free | Res-R101-FPN | SOLOv2 (NeurIPS'20) | 39.7 |
Faster R-CNN | Res-R101-FPN | BCNet (CVPR'21) | 39.8 |
Detector(One-stage) | Backbone | Method | mAP(mask) |
---|---|---|---|
FCOS | Res-R101-FPN | BlendMask (CVPR'20) | 38.4 |
FCOS | Res-R101-FPN | CenterMask (CVPR'20) | 38.3 |
FCOS | Res-R101-FPN | SipMask (ECCV'20) | 37.8 |
FCOS | Res-R101-FPN | CondInst (ECCV'20) | 39.1 |
FCOS | Res-R101-FPN | BCNet (CVPR'21) | 39.6, Pretrained Model, Submission File |
FCOS | Res-X101-FPN | BCNet (CVPR'21) | 41.2 |
Introduction
Segmenting highly overlapping objects is challenging because no distinction is typically made between real object contours and occlusion boundaries. Unlike previous two-stage instance segmentation methods, BCNet models image formation as a composition of two overlapping image layers, where the top GCN layer detects the occluding objects (occluders) and the bottom GCN layer infers the partially occluded instances (occludees). This explicit bilayer modeling of the occlusion relationship naturally decouples the boundaries of the occluding and occluded instances, and accounts for the interaction between them during mask regression. We validate the efficacy of bilayer decoupling on both one-stage and two-stage object detectors with different backbones and network layer choices. The network of BCNet is as follows:
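The bilayer idea above can be sketched in a few lines. The following is a toy numpy illustration, not the official implementation: each "GCN layer" is modeled as a non-local graph convolution over the flattened RoI pixels (soft adjacency from embedded pairwise similarity), and the occludee branch is conditioned on the occluder branch's output. All shapes, layer counts, and weight choices here are illustrative assumptions.

```python
# Toy sketch of bilayer decoupling (NOT the official BCNet code):
# two stacked non-local graph-convolution layers over flattened RoI
# features; the occluder branch's output is fused into the occludee
# branch's input, mirroring the top/bottom layer structure.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def nonlocal_gcn(feat, w_theta, w_phi, w_g):
    """Non-local graph convolution over N = H*W RoI pixels.

    feat: (N, C) flattened RoI features. The adjacency matrix is the
    softmax-normalized similarity of linear embeddings (theta, phi);
    node values come from embedding g; output is a residual update.
    """
    theta, phi, g = feat @ w_theta, feat @ w_phi, feat @ w_g
    adj = softmax(theta @ phi.T, axis=-1)   # (N, N) soft adjacency
    return feat + adj @ g                   # residual aggregation

rng = np.random.default_rng(0)
n, c = 14 * 14, 32                          # 14x14 RoI, 32 channels (toy sizes)
roi = rng.standard_normal((n, c))
w = [rng.standard_normal((c, c)) * 0.01 for _ in range(6)]

occluder = nonlocal_gcn(roi, *w[:3])               # top layer: occluder
occludee = nonlocal_gcn(roi + occluder, *w[3:])    # bottom layer, conditioned on occluder
print(occluder.shape, occludee.shape)              # (196, 32) (196, 32)
```

In the actual model, each branch additionally regresses a boundary map and a mask from its features; the toy above only shows the two-layer feature flow.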
Step-by-step Installation
conda create -n bcnet python=3.7 -y
source activate bcnet
conda install pytorch==1.4.0 torchvision==0.5.0 cudatoolkit=10.1 -c pytorch
# FCOS and coco api and visualization dependencies
pip install ninja yacs cython matplotlib tqdm
pip install opencv-python==4.4.0.40
# Boundary dependency
pip install scikit-image
export INSTALL_DIR=$PWD
# install pycocotools. Please make sure you have installed cython.
cd $INSTALL_DIR
git clone https://github.com/cocodataset/cocoapi.git
cd cocoapi/PythonAPI
python setup.py build_ext install
# install BCNet
cd $INSTALL_DIR
git clone https://github.com/lkeab/BCNet.git
cd BCNet/
python3 setup.py build develop
unset INSTALL_DIR
Dataset Preparation
Prepare the COCO 2017 dataset following this instruction, and use our converted mask annotations to replace the original annotation file for bilayer decoupling training.
mkdir -p datasets/coco
ln -s /path_to_coco_dataset/annotations datasets/coco/annotations
ln -s /path_to_coco_dataset/train2017 datasets/coco/train2017
ln -s /path_to_coco_dataset/test2017 datasets/coco/test2017
ln -s /path_to_coco_dataset/val2017 datasets/coco/val2017
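After swapping in the converted annotations, a quick structural check can catch a bad symlink or a truncated download before a long training run. This is a generic COCO-format sanity check, not a BCNet utility; the toy file written below merely stands in for the real annotation json.

```python
# Hypothetical sanity check for a COCO-format annotation file:
# verify the standard top-level keys exist and that every annotation
# points at a known image id.
import json, os, tempfile

REQUIRED_KEYS = {"images", "annotations", "categories"}

def check_coco_file(path):
    """Return (#images, #annotations) after basic structural checks."""
    with open(path) as f:
        data = json.load(f)
    missing = REQUIRED_KEYS - data.keys()
    assert not missing, f"missing top-level keys: {missing}"
    image_ids = {img["id"] for img in data["images"]}
    for ann in data["annotations"]:
        assert ann["image_id"] in image_ids, f"dangling image_id {ann['image_id']}"
    return len(data["images"]), len(data["annotations"])

# Tiny in-memory example standing in for the converted annotation file.
toy = {
    "images": [{"id": 1, "file_name": "000000000001.jpg"}],
    "annotations": [{"id": 10, "image_id": 1, "category_id": 1,
                     "segmentation": [[0, 0, 4, 0, 4, 4]]}],
    "categories": [{"id": 1, "name": "person"}],
}
path = os.path.join(tempfile.gettempdir(), "toy_coco.json")
with open(path, "w") as f:
    json.dump(toy, f)
counts = check_coco_file(path)
print(counts)  # (1, 1)
```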
Multi-GPU Training and Evaluation on the Validation Set
bash all.sh
Or
CUDA_VISIBLE_DEVICES=0,1 python3 tools/train_net.py --num-gpus 2 \
--config-file configs/fcos/fcos_imprv_R_50_FPN.yaml 2>&1 | tee log/train_log.txt
Pretrained Models
FCOS-version download: link
mkdir pretrained_models
# Put the downloaded pretrained models in this directory.
Testing on Test-dev
export PYTHONPATH=$PYTHONPATH:`pwd`
CUDA_VISIBLE_DEVICES=0,1 python3 tools/train_net.py --num-gpus 2 \
--config-file configs/fcos/fcos_imprv_R_101_FPN.yaml \
--eval-only MODEL.WEIGHTS ./pretrained_models/xxx.pth 2>&1 | tee log/test_log.txt
Visualization
bash visualize.sh
Reference script for producing the bilayer mask annotations:
bash process.sh
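To give an intuition for what the conversion produces, here is an illustrative numpy sketch, not the logic of process.sh (which operates on COCO polygons and decides occlusion ordering): for each instance, one plausible "occluder" target is the union of the other instances' masks that overlap its region, yielding the occludee/occluder mask pair per RoI.

```python
# Illustrative sketch of bilayer mask targets (an assumption, not the
# repo's actual conversion): the occluder mask for instance i is the
# union of all other instances' masks that intersect it.
import numpy as np

def bilayer_targets(masks):
    """masks: (K, H, W) binary instance masks (toy, already rasterized).

    Returns a list of (occludee, occluder) mask pairs per instance.
    """
    pairs = []
    for i, m in enumerate(masks):
        others = [o for j, o in enumerate(masks)
                  if j != i and np.logical_and(m, o).any()]
        occluder = np.any(others, axis=0) if others else np.zeros_like(m)
        pairs.append((m, occluder))
    return pairs

# Two overlapping 6x6 squares as a toy scene.
a = np.zeros((6, 6), bool); a[0:4, 0:4] = True
b = np.zeros((6, 6), bool); b[2:6, 2:6] = True
pairs = bilayer_targets(np.stack([a, b]))
print(pairs[0][1].sum())  # 16 occluder pixels seen from instance 0
```

Note that overlap alone does not tell which instance is in front; the real annotation pipeline resolves that ordering, which this sketch deliberately omits.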
The COCO-OCC Split
Download: link. The split is described in detail in the paper.
Citation
If you find BCNet useful in your research or refer to the provided baseline results, please star this repository and consider citing:
@inproceedings{ke2021bcnet,
author = {Ke, Lei and Tai, Yu-Wing and Tang, Chi-Keung},
title = {Deep Occlusion-Aware Instance Segmentation with Overlapping BiLayers},
booktitle = {CVPR},
year = {2021}
}
Related Links
YouTube Video | Poster | Zhihu Reading
Related Work on partially supervised instance segmentation: CPMask
License
BCNet is released under the MIT license. See LICENSE for additional details. Thanks to the third-party library detectron2.
Questions
Open a GitHub issue or contact 'lkeab@cse.ust.hk'.