A Simple Pooling-Based Design for Real-Time Salient Object Detection

This is a PyTorch implementation of our CVPR 2019 paper.

Prerequisites

Update

We have released our code for joint training with edge, which is also our best-performing model.

Todo

Merge DSS into this repo.

Usage

1. Clone the repository

git clone https://github.com/backseason/PoolNet.git
cd PoolNet/

2. Download the datasets

Download the following datasets and unzip them into the data folder (a sketch of the expected layout follows the list).

  • MSRA-B and HKU-IS dataset. The .lst file for training is data/msrab_hkuis/msrab_hkuis_train_no_small.lst.
  • DUTS dataset. The .lst file for training is data/DUTS/DUTS-TR/train_pair.lst.
  • BSDS-PASCAL dataset. The .lst file for training is ./data/HED-BSDS_PASCAL/bsds_pascal_train_pair_r_val_r_small.lst.
  • Datasets for testing.
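
For orientation, the paths of the .lst files above imply roughly the following layout under data (a sketch only; the image and ground-truth sub-folders come from the downloaded archives and are omitted here):

data/
├── msrab_hkuis/msrab_hkuis_train_no_small.lst
├── DUTS/DUTS-TR/train_pair.lst
├── HED-BSDS_PASCAL/bsds_pascal_train_pair_r_val_r_small.lst
└── pretrained/            (backbone weights from step 3)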

3. Download the pre-trained models for backbone

Download the following pre-trained models into the data/pretrained folder. (For now, we only provide models trained w/o edge.)

4. Train

  1. Set the --train_root and --train_list paths in train.sh correctly (see the training-call sketch after this list).

  2. We demo using ResNet-50 as the network backbone and train with an initial lr of 5e-5 for 24 epochs; the lr is divided by 10 after 15 epochs.

./train.sh

  3. We demo joint training with edge using ResNet-50 as the network backbone and train with an initial lr of 5e-5 for 11 epochs; the lr is divided by 10 after 8 epochs. Each epoch runs for 30000 iters.

./joint_train.sh

  4. After training, the resulting model will be stored under the results/run-* folder.
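
For orientation, the training call wrapped by train.sh presumably looks roughly like the following (a sketch only: --train_root and --train_list are the flags from step 1, the --mode value of 'train' mirrors the --mode='test' used in the test command below, and any further flags actually set by train.sh are omitted):

python main.py --mode='train' --train_root='./data/DUTS/DUTS-TR' --train_list='./data/DUTS/DUTS-TR/train_pair.lst'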

5. Test

For single-dataset testing, change * accordingly; --sal_mode selects the dataset (details can be found in main.py):

python main.py --mode='test' --model='results/run-*/models/final.pth' --test_fold='results/run-*-sal-e' --sal_mode='e'
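
For example, with the w/o edge model from section 6 stored under results/run-0 (substitute your own run folder), testing on the dataset selected by --sal_mode='e' becomes:

python main.py --mode='test' --model='results/run-0/models/final.pth' --test_fold='results/run-0-sal-e' --sal_mode='e'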

For testing on all datasets used in our paper (2 indicates the GPU to use):

./forward.sh 2 main.py results/run-*
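
For instance, with the same run-0 model on GPU 2:

./forward.sh 2 main.py results/run-0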

For joint training, to get salient object detection results, use

./forward.sh 2 joint_main.py results/run-*

and to get edge detection results, use

./forward_edge.sh 2 joint_main.py results/run-*
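
As a concrete example, with the joint (w/ edge) model from section 6 stored under results/run-1 and GPU 2, the two commands become:

./forward.sh 2 joint_main.py results/run-1
./forward_edge.sh 2 joint_main.py results/run-1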

All resulting saliency maps will be stored under the results/run-*-sal-* folders in .png format.

6. Pre-trained models, pre-computed results and evaluation results

We provide the pre-trained models, pre-computed saliency maps, and evaluation results for:

  1. PoolNet-ResNet50 w/o edge model run-0.
  2. PoolNet-ResNet50 w/ edge model (best performance) run-1.

Note:

  1. Only batch_size=1 is supported.
  2. Except for the backbone, we do not use BN layers.

7. Want to participate in the project?

You are welcome to send us your network to make this project bigger.

Please email {j04.liu, andrewhoux}@gmail.com.

If you think this work is helpful, please cite

@inproceedings{Liu2019PoolSal,
  title={A Simple Pooling-Based Design for Real-Time Salient Object Detection},
  author={Jiang-Jiang Liu and Qibin Hou and Ming-Ming Cheng and Jiashi Feng and Jianmin Jiang},
  booktitle={IEEE CVPR},
  year={2019},
}
@article{HouPami19Dss,
  title={Deeply Supervised Salient Object Detection with Short Connections},
  author={Hou, Qibin and Cheng, Ming-Ming and Hu, Xiaowei and Borji, Ali and Tu, Zhuowen and Torr, Philip},
  journal={IEEE TPAMI},
  year={2019},
  volume={41},
  number={4},
  pages={815-828}
}

Thanks to DSS and DSS-pytorch.

License: MIT License

