
Adversarial Unsupervised Domain Adaptation for Cross-modality Cardiac Image Segmentation

Abstract

Deep convolutional neural networks (DCNNs) have achieved great success in medical image segmentation, but a model trained on the source domain often performs poorly on the target domain due to severe domain shift. In addition, medical image annotation is costly and laborious, which introduces the label-scarcity problem on the source domain. Recently, unsupervised domain adaptation (UDA) has become a popular topic in cross-modality medical image segmentation; it aims to recover the performance degradation that occurs when a model well-trained on one domain is applied to unseen domains without annotations. In this work, we investigated the application of adversarial learning to UDA, and reviewed and implemented three representative adversarial unsupervised domain adaptation (AUDA) methods from different perspectives. Extensive experiments and analyses were carried out on the MM-WHS 2017 dataset, demonstrating the effectiveness of adversarial image and feature adaptation for cross-modality cardiac image segmentation.

Reimplemented methods

  • Image Adaptation - CycleGAN

  • Feature Adaptation - ADDA

  • Image + Feature Adaptation - CyCADA

Setup

  1. Follow the official guidance to install PyTorch.
  2. Clone the repository.
  3. Install the Python requirements: pip install -r requirements.txt

Data Preparation

MM-WHS: Multi-Modality Whole Heart Segmentation Challenge (MM-WHS 2017) dataset http://www.sdspeople.fudan.edu.cn/zhuangxiahai/0/mmwhs/

The pre-processed data has been released by PnP-AdaNet. The training data can be downloaded here. The testing CT data can be downloaded here. The testing MR data can be downloaded here.

Image Adaptation

Image adaptation builds on CycleGAN.

  1. Place the CycleGAN-transformed source images in one folder.
  2. Run bash train_fcn.sh to train the segmentation network on them (see the sketch below).
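
Below is a minimal, hedged sketch of the supervised segmentation step that train_fcn.sh wraps, trained on the transformed source images. The dataset and model objects are hypothetical placeholders, not the repo's exact API.

```python
# Sketch only: supervised training of an FCN/DRN segmenter on CycleGAN-transformed
# source images. Dataset, model, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train_segmenter(model, dataset, epochs=20, lr=1e-3, device="cuda"):
    loader = DataLoader(dataset, batch_size=8, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    model.to(device).train()
    for _ in range(epochs):
        for image, label in loader:            # transformed image + source label map
            image = image.to(device)
            label = label.to(device).long()    # (B, H, W) class indices
            logits = model(image)              # (B, C, H, W) class scores
            loss = ce(logits, label)
            opt.zero_grad(); loss.backward(); opt.step()
    return model
```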

Feature Adaptation

  1. Place the source images in one folder.
  2. Run bash train.sh to train the baseline (source-only) model.
  3. Load the pre-trained FCN/DRN network trained on the source images.
  4. Place the target images in a separate folder.
  5. Run bash train_fcn_adda.sh (a minimal ADDA sketch follows this list).
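
For reference, a minimal ADDA-style update is sketched below. The discriminator architecture, feature channel count, and function names are assumptions for illustration; the repo's train_fcn_adda.sh wraps the actual implementation.

```python
# Sketch only: one ADDA step. A frozen source encoder provides reference features;
# the target encoder is updated to fool a domain discriminator.
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    # Assumed small conv discriminator over feature maps with in_ch channels.
    def __init__(self, in_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 3, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

def adda_step(src_encoder, tgt_encoder, disc, x_src, x_tgt, opt_d, opt_t):
    bce = nn.BCEWithLogitsLoss()
    # 1) Discriminator update: source features -> 1, target features -> 0.
    with torch.no_grad():
        f_src = src_encoder(x_src)
    f_tgt = tgt_encoder(x_tgt).detach()
    d_src, d_tgt = disc(f_src), disc(f_tgt)
    loss_d = bce(d_src, torch.ones_like(d_src)) + bce(d_tgt, torch.zeros_like(d_tgt))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # 2) Target-encoder update: fool the discriminator (labels flipped to 1).
    d_tgt = disc(tgt_encoder(x_tgt))
    loss_t = bce(d_tgt, torch.ones_like(d_tgt))
    opt_t.zero_grad(); loss_t.backward(); opt_t.step()
    return loss_d.item(), loss_t.item()
```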

Image + Feature Adaptation

  1. Load the pre-trained FCN/DRN network trained on the CycleGAN-transformed images.
  2. Place the transformed images in one folder.
  3. Place the target images in a separate folder.
  4. Run bash train_fcn_adda.sh (see the initialization sketch below).
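
A hedged sketch of the CyCADA-style initialization, assuming the checkpoint stores a plain state_dict at a hypothetical path; the adversarial step itself is then the same as the ADDA sketch above.

```python
# Sketch only: initialize feature adaptation from the model trained on
# CycleGAN-transformed images. Checkpoint path and format are assumptions.
import copy
import torch

def init_cycada_encoders(model, ckpt_path="checkpoints/fcn_transformed.pth"):
    state = torch.load(ckpt_path, map_location="cpu")  # assumed plain state_dict
    model.load_state_dict(state)
    src_encoder = model                      # frozen reference encoder
    tgt_encoder = copy.deepcopy(model)       # adapted on target images via ADDA
    for p in src_encoder.parameters():
        p.requires_grad = False
    return src_encoder, tgt_encoder
```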

Evaluation

To evaluate the performance of the model, run python eval_fcn_ct.py:

  1. Load the trained model.
  2. Specify the npz folder containing the test volumes (a Dice-evaluation sketch follows).
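
A hedged sketch of per-class Dice evaluation over .npz test volumes. The npz keys, class count, and the predict_volume callback are assumptions; in the common cross-modality MM-WHS benchmark setting the four foreground structures are AA, LAC, LVC, and MYO.

```python
# Sketch only: per-class Dice over .npz test volumes. Adjust keys/paths to the
# actual pre-processed data format.
import glob
import numpy as np

def dice(pred, gt, cls):
    p, g = (pred == cls), (gt == cls)
    inter = np.logical_and(p, g).sum()
    return 2.0 * inter / (p.sum() + g.sum() + 1e-8)

def evaluate(npz_dir, predict_volume, num_classes=5):
    scores = []
    for path in sorted(glob.glob(f"{npz_dir}/*.npz")):
        data = np.load(path)
        image, label = data["image"], data["label"]   # assumed keys
        pred = predict_volume(image)                  # user-supplied model inference
        scores.append([dice(pred, label, c) for c in range(1, num_classes)])
    return np.mean(scores, axis=0)                    # mean Dice per foreground class
```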

Visualization

  1. Load the trained model.
  2. Specify the npz folder containing the test volumes.
  3. Uncomment the visualization portion of the code (see the overlay sketch below).
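
A hedged matplotlib sketch of the kind of segmentation overlay the visualization code produces; function and argument names here are illustrative, not the repo's API.

```python
# Sketch only: overlay a predicted label map on a 2D slice.
import numpy as np
import matplotlib.pyplot as plt

def show_overlay(image_slice, pred_slice, alpha=0.4):
    plt.figure(figsize=(5, 5))
    plt.imshow(image_slice, cmap="gray")
    # Hide background (label 0) so only foreground structures are tinted.
    masked = np.ma.masked_where(pred_slice == 0, pred_slice)
    plt.imshow(masked, cmap="jet", alpha=alpha, interpolation="none")
    plt.axis("off")
    plt.show()
```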

Citation

If you find the codebase useful for your research, please cite the following papers:

@inproceedings{zhu2017unpaired,
  title={Unpaired image-to-image translation using cycle-consistent adversarial networks},
  author={Zhu, Jun-Yan and Park, Taesung and Isola, Phillip and Efros, Alexei A},
  booktitle={Proceedings of the IEEE international conference on computer vision},
  pages={2223--2232},
  year={2017}
}

@inproceedings{tzeng2017adversarial,
  title={Adversarial discriminative domain adaptation},
  author={Tzeng, Eric and Hoffman, Judy and Saenko, Kate and Darrell, Trevor},
  booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
  pages={7167--7176},
  year={2017}
}

@inproceedings{Hoffman_cycada2017,
  title={CyCADA: Cycle Consistent Adversarial Domain Adaptation},
  author={Hoffman, Judy and Tzeng, Eric and Park, Taesung and Zhu, Jun-Yan and Isola, Phillip and Saenko, Kate and Efros, Alexei A and Darrell, Trevor},
  booktitle={International Conference on Machine Learning (ICML)},
  year={2018}
}

@inproceedings{zhao2021mt,
  title={MT-UDA: Towards Unsupervised Cross-modality Medical Image Segmentation with Limited Source Labels},
  author={Zhao, Ziyuan and Xu, Kaixin and Li, Shumeng and Zeng, Zeng and Guan, Cuntai},
  booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
  pages={293--303},
  year={2021},
  organization={Springer}
}

Acknowledgement

Part of the code is adapted from open-source codebases and the original implementations of the algorithms. We thank the authors for their fantastic and efficient codebases.
