DSAL: Deeply Supervised Active Learning from Strong and Weak Labelers for Biomedical Image Segmentation

Keras implementation of our method for IEEE JBHI 2021 paper: "DSAL: Deeply Supervised Active Learning from Strong and Weak Labelers for Biomedical Image Segmentation".

Abstract

Image segmentation is one of the most essential biomedical image processing problems for different imaging modalities, including microscopy and X-ray, in the Internet-of-Medical-Things (IoMT) domain. However, annotating biomedical images is knowledge-driven, time-consuming, and labor-intensive, making it difficult to obtain abundant labels at limited cost. Active learning strategies ease the burden of human annotation by querying only a subset of the training data for annotation. Despite receiving attention, most active learning methods still incur huge computational costs and utilize unlabeled data inefficiently. They also tend to ignore the intermediate knowledge within networks. In this work, we propose a deep active semi-supervised learning framework, DSAL, combining active learning and semi-supervised learning strategies. In DSAL, a new criterion based on the deep supervision mechanism is proposed to select informative samples with high uncertainties and low uncertainties for strong labelers and weak labelers, respectively. The internal criterion leverages the disagreement of intermediate features within the deep learning network for active sample selection, which subsequently reduces the computational costs. We use the proposed criteria to select samples for strong and weak labelers to produce oracle labels and pseudo labels simultaneously at each active learning iteration in an ensemble learning manner, which can be examined with an IoMT platform. Extensive experiments on multiple medical image datasets demonstrate the superiority of the proposed method over state-of-the-art active learning methods.
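
The selection idea described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (the function name, the variance-based disagreement measure, and the array shapes are assumptions, not the paper's implementation): disagreement among the network's deep-supervision outputs ranks samples, the most uncertain ones go to the strong (oracle) labeler, and the most confident ones receive pseudo labels from the weak labeler.

    import numpy as np

    def select_for_labelers(aux_probs, k_strong, k_weak):
        """Hypothetical DSAL-style selection; not the official implementation.

        aux_probs: (n_samples, n_heads, H, W) foreground probabilities from the
        deep-supervision heads of the segmentation network.
        """
        # Disagreement of intermediate predictions: per-pixel variance across
        # heads, averaged over each image.
        disagreement = aux_probs.var(axis=1).mean(axis=(1, 2))  # (n_samples,)
        order = np.argsort(disagreement)
        weak_idx = order[:k_weak]        # lowest uncertainty -> pseudo labels
        strong_idx = order[-k_strong:]   # highest uncertainty -> oracle labels
        return strong_idx, weak_idx

    # Toy usage with random predictions from three deep-supervision heads
    probs = np.random.rand(100, 3, 64, 64)
    strong_idx, weak_idx = select_for_labelers(probs, k_strong=8, k_weak=16)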

Dataset

  • ISIC 2017: [Download] consists of 2000 RGB dermoscopy images with binary masks of lesions.
  • RSNA Bone Age dataset: [Download]. We follow the image processing and sampling methods from BHI 2019 & EMBC 2020 and obtain a small, balanced dataset of 139 samples with masks of finger bones.

Place the data under ./orgData with the following file structure:

./orgData
├── testGT
├── testImg
├── trainGT
├── trainImg
├── valGT
└── valImg
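
Before training, it can help to verify the layout. A minimal sketch (assuming the data are placed exactly as in the tree above):

    import os

    EXPECTED = ["trainImg", "trainGT", "valImg", "valGT", "testImg", "testGT"]
    root = "./orgData"
    missing = [d for d in EXPECTED if not os.path.isdir(os.path.join(root, d))]
    if missing:
        raise SystemExit(f"Missing sub-folders under {root}: {missing}")
    print("Dataset layout looks good.")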

Training

1. Prepare the Environment

Set up a Python environment with Keras and its dependencies (the code is a Keras implementation).

2. Run the Training

cd src
python main.py

3. Other Configurations

  • If you wish to explore different experiment settings, specify the configurations in constants.py; a hypothetical sketch is shown below.
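
For illustration only, here is a hypothetical sketch of the kind of settings such a file could hold; the actual option names and values in constants.py may differ, so check the file itself:

    # Hypothetical experiment settings (names are illustrative, not the
    # repo's actual constants.py).
    DATA_ROOT = "./orgData"        # root of the image/mask folders
    BATCH_SIZE = 8                 # mini-batch size for training
    EPOCHS_PER_ITER = 20           # epochs per active learning round
    NB_ACTIVE_ITERATIONS = 10      # number of active learning rounds
    NB_STRONG_PER_ITER = 16        # samples sent to the strong (oracle) labeler
    NB_WEAK_PER_ITER = 32          # samples pseudo-labeled by the weak labeler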

Evaluation

  • (Optional) Make sure the constants.py under the repo root corresponds to the experiment you want to evaluate. You can do this by simply copying it over:
    cp {global_path/exp/constants.py} .
    
  • Run the test:
    python test.py
    

Citation

If you find the codebase useful for your research, please cite the paper:

@article{zhao2021dsal,
  title={{DSAL}: Deeply supervised active learning from strong and weak labelers for biomedical image segmentation},
  author={Zhao, Ziyuan and Zeng, Zeng and Xu, Kaixin and Chen, Cen and Guan, Cuntai},
  journal={IEEE Journal of Biomedical and Health Informatics},
  volume={25},
  number={10},
  pages={3744--3751},
  year={2021},
  publisher={IEEE}
}

Acknowledgement

Part of the code is adapted from the CEAL codebase. We thank the authors for their fantastic and efficient codebase.

About

[IEEE JBHI 2021] Official Implementation for "DSAL: Deeply Supervised Active Learning From Strong and Weak Labelers for Biomedical Image Segmentation"

License: MIT License

