Mike575 / WSSS4LUAD-VIT


Weakly Supervised Semantic Segmentation for Lung Adenocarcinoma histopathological images using Vision Transformers

This repository contains the source code for weakly supervised semantic segmentation of lung adenocarcinoma histopathological images. We explore the feasibility of using transformer-based image classifiers to generate pseudo-labels for tissue semantic segmentation.

The directory structure of the repository is as follows:

WSSS4LUAD
|
|--- training
|      |--- classification_transformer: training notebooks for the classification
|      |                                (transformer) networks
|      |--- segmentation: training notebooks for the segmentation networks
|
|--- image_preprocessing_augmentation: notebooks for image augmentation
|                                      (CutMix and padding)
|
|--- cam_visualization: notebooks for visualizing CAMs from various networks
|                       using the methods discussed in the paper
|
|--- demo: notebooks demonstrating the CAM generation methods for
|          transformer-based networks, CAM refinement, pseudo-label
|          generation, and segmentation model performance
|
|--- requirements: list of libraries required for the project

Analysis

To run the analysis, use the notebooks from the demo directory. The weights for the best performing CAM-generation models, as well as the segmentation model, can be downloaded from the links below and placed in the models directory inside the demo directory. Note that CUDA is required to run the Demo-GETAM-for-WSSS4LUAD-viz and Demo-Transformer-Explainability notebooks.
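As a minimal shell sketch of the expected layout (the demo/models path follows the description above; the example filenames and the assumption that downloads land in the current directory are illustrative):

```shell
# Create the models directory inside demo/ so the notebooks can find the weights.
mkdir -p demo/models
# After downloading the .pth files from the Google Drive links, move them there, e.g.:
# mv model_vit_base_patch32_224_2.pth demo/models/
# mv deeplabv3plus_dJ_par_resnet50_01.pth demo/models/
```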

A set of training images as well as the validation and test set images are provided in the dataset directory inside the demo directory.

The weights for the best performing models can be downloaded from the Google Drive links below.

| Network/Task | Link |
| --- | --- |
| model_vit_base_patch32_224_2.pth | GDrive |
| model_vit_base_patch16_224_1.pth | GDrive |
| cutmix_hila_vit_base_patch16_224_01.pth | GDrive |
| border_cutmix_GETAM.pth | GDrive |
| deeplabv3plus_dJ_par_resnet50_01.pth | GDrive |

Demo Notebooks

Following is a description of the notebooks in the demo directory.

| Notebook | Description |
| --- | --- |
| DeeplabV3+ Test.ipynb | Analysis of the segmentation model trained on pseudo-labels, evaluated on the test and validation sets. |
| Demo all_pytorch_grad-cams.ipynb | Generates CAMs (GradCAM, GradCAM++, etc.) for the best performing model. |
| Demo all_vit_16_224_exp_viz.ipynb | Generates CAMs (GradCAM, GradCAM++, etc.) for the second best performing model. |
| Demo DeeplabV3+.ipynb | Checks segmentation model performance on random training set images. |
| Demo GETAM for WSSS4LUAD viz.ipynb | Generates CAMs using GETAM. |
| Demo PAR.ipynb | Demonstrates Pixel Adaptive Refinement (PAR), used to refine the initial pseudo-labels, followed by thresholding to generate the training pseudo-labels. |
| Demo Transformer Explainability.ipynb | Generates CAMs using Transformer Explainability. |
| Demo Vit Explain.ipynb | Generates CAMs using attention rollout. |
| PAR.py | Script implementing Pixel Adaptive Refinement (PAR), used to refine the initial pseudo-labels. |
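The thresholding step that turns refined CAMs into hard pseudo-labels can be illustrated with a minimal sketch. The shapes, the 0.5 confidence threshold, the `cams_to_pseudo_labels` helper name, and the use of 255 as an ignore index are assumptions for illustration, not values taken from this repository:

```python
import numpy as np

def cams_to_pseudo_labels(cams: np.ndarray, conf_thresh: float = 0.5) -> np.ndarray:
    """Illustrative sketch: convert per-class CAM scores into a hard label map.

    cams: (num_classes, H, W) activation maps scaled to [0, 1].
    Returns an (H, W) uint8 label map; pixels whose best class score falls
    below conf_thresh are set to 255 (a common 'ignore' index during
    segmentation training).
    """
    labels = cams.argmax(axis=0)              # per-pixel best class
    confident = cams.max(axis=0) >= conf_thresh
    return np.where(confident, labels, 255).astype(np.uint8)

# Toy usage with random activations for 3 tissue classes:
cams = np.random.rand(3, 224, 224)
mask = cams_to_pseudo_labels(cams)
print(mask.shape)  # (224, 224)
```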

Requirements

  • timm
  • tqdm
  • torch
  • numpy
  • pandas
  • einops
  • sklearn
  • pydensecrf
  • torchvision
  • pytorch-grad-cam
  • segmentation_models_pytorch
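These can be installed with pip; note that for some entries the PyPI package name differs from the import name (`sklearn` is provided by `scikit-learn`, `pytorch-grad-cam` by `grad-cam`, and `segmentation_models_pytorch` by `segmentation-models-pytorch`). A sketch of an install command:

```shell
pip install timm tqdm torch numpy pandas einops scikit-learn pydensecrf torchvision grad-cam segmentation-models-pytorch
```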
