
BCIStainer

A solution for the Breast Cancer Immunohistochemical Image Generation Challenge (home page: https://bci.grand-challenge.org).

Details can be found in Breast Cancer Immunohistochemical Image Generation: a Benchmark Dataset and Challenge Review.

BCIStainer tries to translate hematoxylin and eosin (HE) stained slices into immunohistochemistry (IHC) stained slices.

1. Workflow

2. Environment

# using conda
conda create --name bci python=3.8
conda activate bci

# pytorch 1.12.0
pip install torch==1.12.0+cu113 torchvision==0.13.0+cu113 -f https://download.pytorch.org/whl/torch_stable.html

# other packages
pip install -r requirements.txt
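
Optionally, run a quick sanity check that the CUDA build of PyTorch is installed correctly (this snippet is only a convenience, not part of the repo):

# check torch/torchvision versions and GPU visibility
import torch
import torchvision

print(torch.__version__, torchvision.__version__)  # expect 1.12.0+cu113 and 0.13.0+cu113
print(torch.cuda.is_available())                    # True if the CUDA build sees a GPU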

3. Dataset

Download the dataset from the BCI page and put it in the data directory with the following file structure (a minimal pairing sketch follows the tree):

./data
├── test
│   ├── HE
│   ├── IHC
│   └── README.txt
├── train
│   ├── HE
│   ├── IHC
│   └── README.txt
└── val
    ├── HE
    ├── IHC
    └── README.txt
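
For reference, here is a minimal sketch of how HE/IHC pairs in one split could be listed, assuming each HE image has an IHC image with the same file name (the repo's own dataset class may load pairs differently):

# hypothetical helper, not part of the repo
from pathlib import Path

def list_pairs(split_dir):
    """List (HE, IHC) file pairs in one split, assuming matching file names."""
    he_dir = Path(split_dir) / 'HE'
    ihc_dir = Path(split_dir) / 'IHC'
    pairs = []
    for he_path in sorted(he_dir.glob('*.png')):  # assumes .png images
        ihc_path = ihc_dir / he_path.name
        if ihc_path.exists():
            pairs.append((he_path, ihc_path))
    return pairs

print(len(list_pairs('./data/train')))  # number of paired training images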

4. Training and Evaluation

Train the model using train.sh:

# training
CUDA_VISIBLE_DEVICES=0          \
python train.py                 \
    --train_dir   ./data/train  \
    --val_dir     ./data/val    \
    --exp_root    ./experiments \
    --config_file ./configs/stainer_basic_cmp/exp3.yaml \
    --trainer     basic

Logs and models are saved in experiments/stainer_basic_cmp/exp3.

Download the pretrained model and put it into the above directory:

Predict stained slices and calculate metrics using evaluate.sh:

# evaluation
CUDA_VISIBLE_DEVICES=0            \
python evaluate.py                \
    --data_dir    ./data/test     \
    --exp_root    ./experiments   \
    --output_root ./evaluations   \
    --config_file ./configs/stainer_basic_cmp/exp3.yaml \
    --model_name  model_best_psnr \
    --apply_tta   true            \
    --evaluator   basic

Predictions and metrics for each input can be found in evaluations/stainer_basic_cmp/exp3.
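
As a rough reference, the PSNR and SSIM of a single prediction can be computed with scikit-image as below (file names are placeholders; evaluate.py may compute the challenge metrics with different settings):

# rough metric check with scikit-image (paths are placeholders)
from skimage.io import imread
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

pred   = imread('./evaluations/stainer_basic_cmp/exp3/example.png')  # predicted IHC image
target = imread('./data/test/IHC/example.png')                       # ground-truth IHC image

psnr = peak_signal_noise_ratio(target, pred, data_range=255)
ssim = structural_similarity(target, pred, channel_axis=-1, data_range=255)  # skimage >= 0.19
print(f'PSNR: {psnr:.4f}  SSIM: {ssim:.4f}')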

5. Metrics on the Test Set

Configs                 Style  SimAM  Comparator  TTA  PSNR     SSIM
stainer_basic_cmp/exp1  mod    x      basic       x    22.3711  0.5293
                                                  o    22.7570  0.5743
stainer_basic_cmp/exp2  adain  x      basic       x    22.8123  0.5273
                                                  o    23.3942  0.5833
stainer_basic_cmp/exp3  mod    o      basic       x    22.5357  0.5175
                                                  o    22.9293  0.5585
stainer_basic_cmp/exp4  adain  o      basic       x    22.5447  0.5316
                                                  o    22.9809  0.5697
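
The TTA column above shows that test-time augmentation consistently improves both PSNR and SSIM. Below is a minimal sketch of flip-based TTA, assuming the model maps an HE batch of shape (N, C, H, W) to an IHC batch of the same shape; the actual augmentations applied by evaluate.py may differ:

# flip-based test-time augmentation (sketch, assumes NCHW tensors)
import torch

@torch.no_grad()
def predict_with_tta(model, he_batch):
    outputs = []
    for dims in ((), (-1,), (-2,), (-1, -2)):             # identity, h-flip, v-flip, both
        x = torch.flip(he_batch, dims=dims) if dims else he_batch
        y = model(x)
        y = torch.flip(y, dims=dims) if dims else y       # undo the flip before averaging
        outputs.append(y)
    return torch.stack(outputs, dim=0).mean(dim=0)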

6. Examples

7. Artifacts

There are four types of artifacts generated by the stainer:

  • shadow: in areas without cells, the stainer generates regions that are darker than the areas with cells
  • tiny droplets: appear near small, dark nuclei
  • large droplets: appear at random locations in the stained images
  • blur: the stained images are far less sharp than the ground truth

(Example images: shadow, tiny droplets, large droplets, blur.)

8. References

[1] Karras T, Laine S, Aittala M, et al. Analyzing and improving the image quality of StyleGAN[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020: 8110-8119.

[2] Yang L, Zhang R Y, Li L, et al. SimAM: A simple, parameter-free attention module for convolutional neural networks[C]//International conference on machine learning. PMLR, 2021: 11863-11874.

[3] Lin T Y, Goyal P, Girshick R, et al. Focal loss for dense object detection[C]//Proceedings of the IEEE international conference on computer vision. 2017: 2980-2988.

[4] Zhao H, Gallo O, Frosio I, et al. Loss functions for image restoration with neural networks[J]. IEEE Transactions on Computational Imaging, 2016, 3(1): 47-57.

[5] Wang T C, Liu M Y, Zhu J Y, et al. High-resolution image synthesis and semantic manipulation with conditional GANs[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2018: 8798-8807.

[6] Buslaev A, Iglovikov V I, Khvedchenya E, et al. Albumentations: fast and flexible image augmentations[J]. Information, 2020, 11(2): 125.

[7] The implementation of the weight-demodulated layer is copied from lucidrains/stylegan2-pytorch.

[8] The EMA updater is from the package lucidrains/ema-pytorch.

[9] francois-rozet/piqa provides the implementation of the SSIM loss.

9. To Improve

  • In the current dataset, many pairs of HE and IHC images are mismatched in cell structure, so high PSNR and SSIM are not always consistent with a good translation from HE to IHC. Try other datasets, or select highly matched pairs to retrain the model.
  • Try a better discriminator to improve image sharpness and reduce artifacts. I recommend trying guanxianchao's solution for better image quality without artifacts.


License: MIT License

