This repository provides the source code and results for the paper entitled "Unified-modal Salient Object Detection via Adaptive Prompt Learning".
arXiv version: https://arxiv.org/abs/2311.16835.
Thank you for your attention.
If you find our work helpful, please cite:
```
@article{wang2023unified,
  title={Unified-modal Salient Object Detection via Adaptive Prompt Learning},
  author={Wang, Kunpeng and Li, Chenglong and Tu, Zhengzheng and Luo, Bin},
  journal={arXiv preprint arXiv:2311.16835},
  year={2023}
}
```
The predicted RGB, RGB-D, and RGB-T saliency maps can be found here. [baidu pan fetch code: vpvt]
The pretrained parameters of our models can be found here. [baidu pan fetch code: o8yx]
- Download the datasets for training and testing from here. [baidu pan fetch code: 2sfr]
- Download the pretrained parameters of the backbone from here. [baidu pan fetch code: mad3]
- Organize the dataset directories for pre-training and fine-tuning (a sketch of one possible layout follows this list).
- Create directories for the experiment and parameter files.
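The snippet below creates one possible layout; the directory names are assumptions, not dictated by the repository, so mirror whatever your copy of `./options.py` expects:

```python
# Hypothetical directory layout for datasets, experiments, and parameters;
# the names are placeholders chosen for illustration.
import os

for d in (
    'datasets/pretrain',   # RGB SOD data for pre-training
    'datasets/finetune',   # RGB-D / RGB-T data for fine-tuning
    'experiments',         # training logs and checkpoints
    'parameters',          # backbone and model weights
):
    os.makedirs(d, exist_ok=True)
```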
- Please use `conda` to install `torch` (1.12.0) and `torchvision` (0.13.0).
- Install other packages: `pip install -r requirements.txt`.
- Set the paths of all datasets in `./options.py` (see the sketch after this list).
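For reference, here is a minimal sketch of the kind of dataset-path settings `./options.py` may contain; the option names and default paths below are assumptions, not the repository's actual identifiers:

```python
# Hypothetical sketch of dataset-path options; the argument names and
# default paths are placeholders -- match them to the real ./options.py.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--rgb_root',  type=str, default='/data/SOD/RGB/',   help='RGB SOD dataset root')
parser.add_argument('--rgbd_root', type=str, default='/data/SOD/RGB-D/', help='RGB-D SOD dataset root')
parser.add_argument('--rgbt_root', type=str, default='/data/SOD/RGB-T/', help='RGB-T SOD dataset root')
opt = parser.parse_args()
```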
Pre-training:

```
python -m torch.distributed.launch --nproc_per_node=2 --master_port=2024 train_parallel.py
```

Fine-tuning:

```
python -m torch.distributed.launch --nproc_per_node=2 --master_port=2024 train_parallel_multi.py
```

Testing (produce the saliency maps):

```
python test_produce_maps.py
```
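Both training commands assume two GPUs; adjust `--nproc_per_node` (and `--master_port`, if that port is taken) to your machine. Once `test_produce_maps.py` has written the saliency maps, you can sanity-check them against the ground truth with the standard SOD mean absolute error; the snippet below is an illustrative sketch with placeholder file paths, not a script from this repository:

```python
# Illustrative MAE check between a predicted saliency map and its
# ground-truth mask; file paths are placeholders. Assumes the predicted
# map and the mask share the same resolution.
import numpy as np
from PIL import Image

def mae(pred_path, gt_path):
    # Load both maps as grayscale and normalize to [0, 1].
    pred = np.asarray(Image.open(pred_path).convert('L'), dtype=np.float64) / 255.0
    gt = np.asarray(Image.open(gt_path).convert('L'), dtype=np.float64) / 255.0
    # Mean absolute per-pixel difference.
    return float(np.abs(pred - gt).mean())

print(mae('maps/VT821/example.png', 'gt/VT821/example.png'))
```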
The implementation of this project is based on the following link.
If you have any questions, please contact us (kp.wang@foxmail.com).