ContrastPrior

The code of "Contrast Prior and Fluid Pyramid Integration for RGBD Salient Object Detection" (CVPR 2019).

For training:

  1. Clone this code with git clone https://github.com/JXingZhao/ContrastPrior.git --recursive; the source code directory is referred to as $ContrastPrior below;

  2. Download the training data (rmhn) and extract it to $ContrastPrior/data/;

  3. Build Caffe with cd caffe && mkdir build && cd build && cmake .. && make -j32 && make pycaffe;

  4. Download the initial model and put it into $ContrastPrior/Model/;

  5. Start training with python run.py (a consolidated command sketch is given after this list).
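For convenience, the training steps can be chained in a single shell session. This is only a sketch, assuming the default directory layout above and that Caffe's dependencies (CUDA, cuDNN, BLAS, protobuf, etc.) are already installed; the training data and initial model are the downloads referenced in steps 2 and 4.

```bash
# Sketch of the full training setup, assuming Caffe's dependencies are installed.
git clone --recursive https://github.com/JXingZhao/ContrastPrior.git
cd ContrastPrior                     # this directory is $ContrastPrior

# Before building, place the downloads from steps 2 and 4:
#   training data -> data/
#   initial model -> Model/

# Build Caffe and its Python interface
cd caffe && mkdir build && cd build
cmake .. && make -j32 && make pycaffe
cd ../..

# Start training
python run.py
```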

For testing:

  1. Download the pretrained model and put it into $ContrastPrior/Model/;

  2. Generate saliency maps with python test.py;

  3. Run $ContrastPrior/evaluation/main.m to evaluate the saliency maps (see the sketch after this list).
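The testing steps can be run from the shell as well. A minimal sketch, assuming the pretrained model is already in $ContrastPrior/Model/ and that MATLAB is on the PATH (opening evaluation/main.m in the MATLAB desktop and running it there works just as well):

```bash
cd ContrastPrior                     # $ContrastPrior

# Steps 1-2: generate saliency maps with the downloaded pretrained model
python test.py

# Step 3: evaluate the generated maps with the MATLAB script
# (assumes MATLAB is on the PATH; the MATLAB GUI can be used instead)
cd evaluation
matlab -nodisplay -nosplash -r "main; exit"
```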

Pretrained models, datasets and results:

| Page | Training Set (rmhn) | All RGBD Datasets | Evaluation results |

If you find this work helpful, please cite:

@inproceedings{zhao2019Contrast,
  title={Contrast Prior and Fluid Pyramid Integration for RGBD Salient Object Detection},
  author={Zhao, Jia-Xing and Cao, Yang and Fan, Deng-Ping and Cheng, Ming-Ming and Li, Xuan-Yi and Zhang, Le},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2019}
}

@inproceedings{fan2017structure,
  title={{Structure-measure: A New Way to Evaluate Foreground Maps}},
  author={Fan, Deng-Ping and Cheng, Ming-Ming and Liu, Yun and Li, Tao and Borji, Ali},
  booktitle={IEEE International Conference on Computer Vision (ICCV)},
  pages={4548--4557},
  year={2017},
  note={\url{http://dpfan.net/smeasure/}},
  organization={IEEE}
}
