by Yicheng Wu*+, Xiangde Luo+, Zhe Xu, Xiaoqing Guo, Lie Ju, Zongyuan Ge, Wenjun Liao and Jianfei Cai.
<19.03.2024> We released the code;
<27.02.2024> Our paper was accepted by CVPR 2024;
This repository is for our paper: "Diversified and Personalized Multi-rater Medical Image Segmentation". Here, we study the inherent annotation ambiguity problem in medical image segmentation and evaluate our model on two datasets: the public LIDC-IDRI dataset and our in-house NPC-170 dataset. For LIDC-IDRI, we use the same pre-processed data as MedicalMatting. The NPC-170 dataset will be released in the MMIS-2024 grand challenge at ACM MM 2024; details will be announced soon.
This repository is based on PyTorch 2.0.1+cu118 and Python 3.11.4. All experiments in our paper were conducted on a single NVIDIA GeForce RTX 3090 GPU.
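For reference, one possible environment setup is sketched below (it assumes conda; the environment name `dpersona` is our choice, and the repo may require additional dependencies):

```shell
# create and activate a fresh environment (hypothetical name)
conda create -n dpersona python=3.11 -y
conda activate dpersona
# install PyTorch 2.0.1 built against CUDA 11.8
pip install torch==2.0.1 torchvision==0.15.2 --index-url https://download.pytorch.org/whl/cu118
```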
- Clone this repo;
git clone https://github.com/ycwu1997/D-Persona.git
- Put the data into "./dataset";
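Before training, a quick sanity check can confirm the data is in place (a hypothetical helper, not part of this repo; we assume only that the pre-processed files sit under "./dataset"):

```python
from pathlib import Path

# hypothetical sanity check: confirm the pre-processed data is where training expects it
data_root = Path("./dataset")
assert data_root.exists(), "Put the pre-processed LIDC-IDRI data under ./dataset first"
print(sorted(p.name for p in data_root.iterdir()))
```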
- First-stage training;
cd ./D-Persona/code
# e.g., the LIDC-IDRI dataset
python train_dp.py --stage 1 --val_num 10 --gpu 0
- Put the first-stage weights into "../code/";
cp ../models/[YOUR_MODEL_PATH]/DPersona1_LIDC_[IDX]_best.pth ../code/
- Second-stage training;
python train_dp.py --stage 2 --val_num 100 --gpu 0
- Test the model;
# e.g., first-stage performance on the LIDC-IDRI dataset
python evaluate_dp.py --stage 1 --save_path ../models/[YOUR_MODEL_PATH] --test_num 50
# e.g., second-stage performance
python evaluate_dp.py --stage 2 --save_path ../models/[YOUR_MODEL_PATH] --test_num 500
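For context, diversified multi-rater predictions on LIDC-IDRI are commonly scored with the Generalized Energy Distance (GED). The sketch below is an illustrative NumPy implementation of GED² with the usual 1 - IoU distance; it is not necessarily the exact metric code used by evaluate_dp.py:

```python
import numpy as np

def iou_distance(a, b):
    # d(a, b) = 1 - IoU between two binary masks; defined as 0 when both are empty
    union = np.logical_or(a, b).sum()
    return 0.0 if union == 0 else 1.0 - np.logical_and(a, b).sum() / union

def ged_squared(samples, raters):
    # GED^2 = 2*E[d(s, y)] - E[d(s, s')] - E[d(y, y')]
    # samples: predicted binary masks, raters: multi-rater ground-truth masks
    cross = np.mean([iou_distance(s, y) for s in samples for y in raters])
    intra_s = np.mean([iou_distance(s, t) for s in samples for t in samples])
    intra_y = np.mean([iou_distance(y, z) for y in raters for z in raters])
    return 2.0 * cross - intra_s - intra_y
```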
If our D-Persona model is useful for your research, please consider citing:
@inproceedings{wu2024dpersona,
title={Diversified and Personalized Multi-rater Medical Image Segmentation},
author={Wu, Yicheng and Luo, Xiangde and Xu, Zhe and Guo, Xiaoqing and Ju, Lie and Ge, Zongyuan and Liao, Wenjun and Cai, Jianfei},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year={2024},
organization={IEEE}
}
Our code is adapted from Pionono, MedicalMatting, and Prob. U-Net. Thanks to these authors for their valuable work; we hope our model can promote related research as well.
If you have any questions, feel free to contact me at 'ycwueli@gmail.com'.