jacobzhaoziyuan / Meta-Hallucinator

[MICCAI 2022] Official Implementation for "Meta-hallucinator: Towards few-shot cross-modality cardiac image segmentation"


Meta-hallucinator: Towards Few-Shot Cross-Modality Cardiac Image Segmentation


PyTorch implementation of our method for the MICCAI 2022 paper: "Meta-hallucinator: Towards Few-Shot Cross-Modality Cardiac Image Segmentation". (Our code will be released soon.)

Abstract

Domain shift and label scarcity heavily limit deep learning applications to various medical image analysis tasks. Unsupervised domain adaptation (UDA) techniques have recently achieved promising cross-modality medical image segmentation by transferring knowledge from a label-rich source domain to an unlabeled target domain. However, it is also difficult to collect annotations from the source domain in many clinical applications, rendering most prior works suboptimal with the label-scarce source domain, particularly for few-shot scenarios, where only a few source labels are accessible. To achieve efficient few-shot cross-modality segmentation, we propose a novel transformation-consistent meta-hallucination framework, meta-hallucinator, with the goal of learning to diversify data distributions and generate useful examples for enhancing cross-modality performance. In our framework, hallucination and segmentation models are jointly trained with a gradient-based meta-learning strategy to synthesize examples that lead to good segmentation performance on the target domain. To further facilitate data hallucination and cross-domain knowledge transfer, we develop a self-ensembling model with a hallucination-consistent property. Our meta-hallucinator can seamlessly collaborate with the meta-segmenter for learning to hallucinate with mutual benefits from a combined view of meta-learning and self-ensembling learning. Extensive studies on the MM-WHS 2017 dataset for cross-modality cardiac segmentation demonstrate that our method performs favorably against various approaches by a large margin in the few-shot UDA scenario.
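As a rough illustration of the training scheme sketched in the abstract, the snippet below shows one meta-iteration in which a hallucination network synthesizes extra samples from the few labeled source images, the segmenter takes a differentiable inner-loop gradient step on those hallucinated samples, and both networks are then meta-updated so that the adapted segmenter performs well on held-out data. This is only a minimal PyTorch sketch and not the released implementation; the Hallucinator and Segmenter classes, the inner learning rate, the loss choices, and the toy data shapes are all placeholder assumptions.

# Illustrative sketch only (placeholder networks and hyper-parameters), not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

class Hallucinator(nn.Module):
    """Placeholder network that produces an augmented version of an input slice."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, x):
        return torch.tanh(self.net(x)) + x  # residual perturbation of the input image

class Segmenter(nn.Module):
    """Placeholder fully-convolutional segmenter (stand-in for the real backbone)."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, n_classes, 1))
    def forward(self, x):
        return self.net(x)

hallucinator, segmenter = Hallucinator(), Segmenter()
meta_opt = torch.optim.Adam(list(hallucinator.parameters()) + list(segmenter.parameters()), lr=1e-4)
inner_lr = 1e-3  # assumed inner-loop step size

def meta_step(x_src, y_src, x_val, y_val):
    """One meta-iteration: inner update on hallucinated data, outer update on held-out data."""
    # 1) Hallucinate additional training samples from the few labeled source images.
    x_hal = hallucinator(x_src)

    # 2) Inner loop: one differentiable gradient step of the segmenter on hallucinated data.
    params = dict(segmenter.named_parameters())
    inner_loss = F.cross_entropy(functional_call(segmenter, params, (x_hal,)), y_src)
    grads = torch.autograd.grad(inner_loss, tuple(params.values()), create_graph=True)
    fast_params = {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}

    # 3) Outer loop: the adapted segmenter should perform well on held-out data,
    #    so gradients flow back into both the segmenter and the hallucinator.
    meta_loss = F.cross_entropy(functional_call(segmenter, fast_params, (x_val,)), y_val)

    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
    return meta_loss.item()

# Toy usage with random tensors standing in for cardiac slices and label maps.
x_src, y_src = torch.randn(2, 1, 64, 64), torch.randint(0, 5, (2, 64, 64))
x_val, y_val = torch.randn(2, 1, 64, 64), torch.randint(0, 5, (2, 64, 64))
print(meta_step(x_src, y_src, x_val, y_val))

The paper's full method additionally couples this meta-learned hallucinator with a self-ensembling (teacher-student) segmenter and a hallucination-consistency term, which are omitted here for brevity.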

Citation

If you find the codebase useful for your research, please cite the paper:

@inproceedings{zhao2022meta,
  title={Meta-hallucinator: Towards Few-Shot Cross-Modality Cardiac Image Segmentation},
  author={Zhao, Ziyuan and Zhou, Fangcheng and Zeng, Zeng and Guan, Cuntai and Zhou, S. Kevin},
  booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
  pages={128--139},
  year={2022},
  organization={Springer}
}


License: MIT License