zeyuxiao1997 / MEANet

Code of MEANet: Multi-Modal Edge-Aware Network for Light Field Salient Object Detection


MEANet

PyTorch implementation of MEANet: Multi-Modal Edge-Aware Network for Light Field Salient Object Detection.

Requirements

  • Python 3.6
  • PyTorch 1.10.2
  • Torchvision 0.4.0
  • CUDA 10.0
  • TensorBoard 2.7.0

Usage

To Train

  • Download the training dataset and set 'train_data_path' to its location.
  • Start training with
python -m torch.distributed.launch --nproc_per_node=4 train.py
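The launcher above spawns one worker process per GPU and passes a `--local_rank` argument to each, alongside `RANK`/`WORLD_SIZE` environment variables (documented behavior of `torch.distributed.launch`). A minimal sketch of how a training script typically picks these up; the parsing code here is an illustration, not taken from this repository's train.py:

```python
import argparse
import os

# torch.distributed.launch starts one process per GPU and passes
# --local_rank to each; it also sets RANK and WORLD_SIZE env vars.
parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)
args, _ = parser.parse_known_args()

# Fall back to single-process defaults when not run under the launcher.
rank = int(os.environ.get("RANK", 0))
world_size = int(os.environ.get("WORLD_SIZE", 1))

# Each worker would then bind to its GPU before building the model,
# e.g. torch.cuda.set_device(args.local_rank).
```

With `--nproc_per_node=4`, four such processes run in parallel, each seeing a different `local_rank` in 0..3.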

To Test

  • Download the testing dataset and place it in the 'dataset/test/' folder.
  • Download the pretrained MEANet model and place it in the 'trained_weight/' folder.
  • Set the weight_name in test.py to the model to be evaluated.
  • Start testing with
python test.py
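Testing writes out saliency maps, which are then scored against ground-truth masks; a standard metric for salient object detection is mean absolute error (MAE). A small self-contained sketch of that metric (the helper name and the [0, 1] normalization convention are our own, not from this repo's evaluation code):

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a predicted saliency map and its
    ground-truth mask. Inputs in 0-255 are rescaled to [0, 1] first."""
    pred = pred.astype(np.float64)
    gt = gt.astype(np.float64)
    if pred.max() > 1:
        pred /= 255.0
    if gt.max() > 1:
        gt /= 255.0
    return float(np.abs(pred - gt).mean())

# Example: one pixel off by 0.5 in a 2x2 map -> MAE of 0.5 / 4 = 0.125
pred = np.array([[0.5, 1.0], [0.0, 0.0]])
gt = np.array([[1.0, 1.0], [0.0, 0.0]])
score = mae(pred, gt)
```

Lower MAE is better; it is typically averaged over all images in a test dataset.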

Download

Trained model for testing

We release two versions of the trained model:

Trained with an additional 100 samples from HFUT-Lytro: available on Baidu Pan with fetch code 0o0r

Trained only with DUTLF-FS: available on Baidu Pan with fetch code 75bn

Saliency map

We release two versions of the saliency maps:

From the model trained with an additional 100 samples from HFUT-Lytro: available on Baidu Pan with fetch code x7xa, or on Google Drive

From the model trained only with DUTLF-FS: available on Baidu Pan with fetch code s7vn

Citation

Please cite our paper if you find the work useful:

@article{JIANG202278,
  title = {MEANet: Multi-modal edge-aware network for light field salient object detection},
  journal = {Neurocomputing},
  volume = {491},
  pages = {78-90},
  year = {2022},
  author = {Yao Jiang and Wenbo Zhang and Keren Fu and Qijun Zhao}
}
