BayeSeg

The official implementation of "Joint Modeling of Image and Label Statistics for Enhancing Model Generalizability of Medical Image Segmentation", which has been accepted by MICCAI 2022.

Content

  • Dependencies
  • Quick test
  • How to train
  • Citation

Dependencies

BayeSeg was implemented on Ubuntu 16.04 with Python 3.6. Before training and testing, please create an environment via Anaconda (assuming it has already been installed on your computer) and install PyTorch 1.10.2 as follows:

conda create -n BayeSeg python=3.6
source activate BayeSeg
conda install pytorch==1.10.2 -c pytorch

Besides, please install other packages using pip install -r requirements.txt.
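
Optionally, you can verify the installation with a quick check (this command is not part of the repo); it prints the installed PyTorch version and whether a CUDA device is visible:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"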

Quick test

BayeSeg was tested on the public datasets from MICCAI 2017 ACDC and MICCAI 2019 MS-CMRSeg. For ACDC, all training cases were used for testing. For MS-CMRSeg, 15 cases were randomly selected for testing.

Datasets/Models   Parameters   BaiduPan            OneDrive
ACDC              -            link                link
MS-CMRSeg         -            link (code: s4t8)   link
Unet              25.8M        link (code: 1zgr)   link
PUnet             5.0M         link (code: 07rm)   link
Baseline          26.9M        link (code: 1i7y)   link
BayeSeg           26.9M        link (code: 0an5)   link
  • ACDC comes from MICCAI 2017 ACDC; one needs to download it from its official homepage.
  • MS-CMRSeg.zip contains three folders, i.e., train, val, and test.
    • train contains 25 subjects randomly selected from LGE CMR of MS-CMRSeg
    • val contains 5 subjects randomly selected from LGE CMR of MS-CMRSeg
    • test contains three sequences, i.e., C0 (bSSFP CMR), LGR (LGE CMR), and T2 (T2-weighted CMR), and each sequence consists of 15 subjects randomly selected from MS-CMRSeg.
  • Unet.zip contains the checkpoint of the U-Net model, which was trained on LGE CMR using cross-entropy.
  • PUnet.zip contains the checkpoint of the PU-Net model, which was trained on LGE CMR using its default loss.
  • Baseline.zip contains the checkpoint of the Baseline model, which was trained on LGE CMR using only cross-entropy.
  • BayeSeg.zip contains the checkpoint of the BayeSeg model, which was trained on LGE CMR using an additional variational loss. (See the note after this list for where demo.sh expects these checkpoints.)
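
The test commands in demo.sh load each model via --resume logs/<Model>/checkpoint.pth, so the downloaded checkpoints should end up at those paths (how each archive unpacks is an assumption here; move the files if needed). A quick way to confirm:

ls logs/Unet/checkpoint.pth logs/PUnet/checkpoint.pth logs/Baseline/checkpoint.pth logs/BayeSeg/checkpoint.pth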

We have provided scripts for testing U-Net, PU-Net, Baseline, and BayeSeg in demo.sh. Please test these models as follows.

The test directories are defined in inference.py as follows,

if dataset in ['MSCMR', 'ACDC']:
    test_folder = "../Datasets/{}/test/{}/images/".format(dataset, sequence)
    label_folder = "../Datasets/{}/test/{}/labels/".format(dataset, sequence)
else:
    raise ValueError('Invalid dataset: {}'.format(dataset))

For ACDC, one needs to download the dataset from its homepage, and then prepare the test data as above.
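
For example, for the LGE CMR (LGR) test set of MS-CMRSeg, the folders read by inference.py can be checked with the following command (run from the same directory as main.py, since the paths above are relative):

ls ../Datasets/MSCMR/test/LGR/images/ ../Datasets/MSCMR/test/LGR/labels/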

To test the performance of U-Net, PU-Net, Baseline, and BayeSeg on the LGE CMR of MS-CMRSeg, please uncomment the corresponding line in demo.sh, and then run sh demo.sh.

# test Unet
# CUDA_VISIBLE_DEVICES=0 python -u main.py --model Unet --eval --dataset MSCMR --sequence LGR --resume logs/Unet/checkpoint.pth --output_dir results --device cuda

# test PUnet
# CUDA_VISIBLE_DEVICES=0 python -u main.py --model PUnet --eval --dataset MSCMR --sequence LGR --resume logs/PUnet/checkpoint.pth --output_dir results --device cuda

# test baseline
# CUDA_VISIBLE_DEVICES=0 python -u main.py --model Baseline --eval --dataset MSCMR --sequence LGR --resume logs/Baseline/checkpoint.pth --output_dir results --device cuda

# test BayeSeg
# CUDA_VISIBLE_DEVICES=0 python -u main.py --model BayeSeg --eval --dataset MSCMR --sequence LGR --resume logs/BayeSeg/checkpoint.pth --output_dir results --device cuda

Here, --sequence can be set to C0, LGR, or T2 for MS-CMRSeg, and to C0 for ACDC. For example, to test the cross-sequence segmentation performance of U-Net, PU-Net, Baseline, and BayeSeg on the T2-weighted CMR of MS-CMRSeg, please change --sequence LGR to --sequence T2.
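
For instance, the BayeSeg test command from demo.sh then becomes:

# test BayeSeg on T2-weighted CMR
CUDA_VISIBLE_DEVICES=0 python -u main.py --model BayeSeg --eval --dataset MSCMR --sequence T2 --resume logs/BayeSeg/checkpoint.pth --output_dir results --device cuda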

How to train

All models were trained on the LGE CMR of MS-CMRSeg, and the root of the training data is defined in data/mscmr.py as follows,

root = Path('your/dataset/directory' + args.dataset)

Please replace your/dataset/directory with your own directory. Note that the dataset name is appended by string concatenation, so the directory should end with a trailing /.
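
For example, assuming the data is stored under /path/to/Datasets/ (a hypothetical location used here only for illustration), the line would read:

root = Path('/path/to/Datasets/' + args.dataset)  # e.g., /path/to/Datasets/MSCMR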

To train U-Net, PU-Net, Baseline, and BayeSeg, please uncomment the corresponding line in demo.sh, and run sh demo.sh.

# train Unet
# CUDA_VISIBLE_DEVICES=0 python -u main.py --model Unet --batch_size 8 --output_dir logs/Unet --device cuda

# train PUnet
# CUDA_VISIBLE_DEVICES=0 python -u main.py --model PUnet --batch_size 8 --output_dir logs/PUnet --device cuda

# train Baseline
# CUDA_VISIBLE_DEVICES=0 python -u main.py --model Baseline --batch_size 8 --output_dir logs/Baseline --device cuda

# train BayeSeg
# CUDA_VISIBLE_DEVICES=0 python -u main.py --model BayeSeg --batch_size 8 --output_dir logs/BayeSeg --device cuda

Citation

If our work is helpful in your research, please cite it as follows.

[1] S. Gao, H. Zhou, Y. Gao, and X. Zhuang, "Joint Modeling of Image and Label Statistics for Enhancing Model Generalizability of Medical Image Segmentation," arXiv e-print, arXiv:2206.04336, 2022. [arXiv] [MICCAI]

@Article{Gao/BayeSeg/2022,
  title   = {Joint Modeling of Image and Label Statistics for Enhancing Model Generalizability of Medical Image Segmentation},
  author  = {Gao, Shangqi and Zhou, Hangqi and Gao, Yibo and Zhuang, Xiahai},
  journal = {arXiv e-print, arXiv:2206.04336},
  year    = {2022}
}

If you have any questions, please don't hesitate to contact us via shqgao@163.com or zxh@fudan.edu.cn.
