SwinTransformerV2_UNet_Pytorch

This repository contains a PyTorch implementation of SwinTransformerV2-UNet and can be used to train the model on your own segmentation datasets.

Preparation

Download the hubmap_2022_256X256 dataset:

Link: https://pan.baidu.com/s/13CZeZxIJlFo9NgtOSoqJpw?pwd=0615
Extraction code: 0615

Project Structure

├── datasets: Load datasets
    ├── mydataset.py: Build the train dataloader and valid dataloader
    ├── ext_transforms.py: Additional data augmentation methods
├── models: SwinTransformerV2-UNet model
    ├── cbam.py: Construct "CBAM: Convolutional Block Attention Module" module.
    ├── swintransformerv2.py: Construct "swintransformerv2" module.
    ├── swinv2UNet.py: Construct "SwinTransformerV2UNet" module.
├── util: 
    ├── losses.py: Construct "DiceScore", "DiceBCELoss" and "Dice_th_pred" modules (see the sketch below).
    ├── scheduler.py: Construct a lr_scheduler.
├── engine.py: Function code for a training/validation process.
├── model_configs.py: Define config parameters.
└── train_gpu.py: Entry point for launching model training
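
For orientation, DiceBCELoss in util/losses.py combines a Dice term with binary cross-entropy. The following is a minimal illustrative sketch assuming the standard Dice + BCE formulation; it is not the repository's exact implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class DiceBCELoss(nn.Module):
    """Dice + BCE combined loss for binary segmentation (illustrative sketch)."""

    def __init__(self, smooth: float = 1.0):
        super().__init__()
        self.smooth = smooth

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # BCE computed on raw logits for numerical stability.
        bce = F.binary_cross_entropy_with_logits(logits, targets)

        # Dice term computed on sigmoid probabilities, flattened per sample.
        probs = torch.sigmoid(logits).flatten(1)
        flat_targets = targets.flatten(1)
        intersection = (probs * flat_targets).sum(dim=1)
        dice = (2.0 * intersection + self.smooth) / (
            probs.sum(dim=1) + flat_targets.sum(dim=1) + self.smooth
        )
        return bce + (1.0 - dice).mean()


if __name__ == "__main__":
    criterion = DiceBCELoss()
    logits = torch.randn(2, 1, 256, 256)
    masks = torch.randint(0, 2, (2, 1, 256, 256)).float()
    print(criterion(logits, masks))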

Precautions

Before using this code to train on your own dataset, first open the model_configs.py file and set the train_bs and self.num_classes parameters. Then open the mydataset.py file and adjust the prefix parameter for your dataset; prefix is only used to join the data root path. If you have multi-class labels, open losses.py and modify the DiceBCELoss module by replacing F.binary_cross_entropy_with_logits with F.cross_entropy, as sketched below.
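
The loss swap described above amounts to moving from binary cross-entropy on a single-channel mask to standard cross-entropy on integer class indices. A minimal sketch of the two call signatures (shapes chosen only for illustration):

import torch
import torch.nn.functional as F

n, num_classes, h, w = 2, 4, 256, 256

# Binary segmentation (the default DiceBCELoss case): logits and float masks
# share the same shape.
bin_logits = torch.randn(n, 1, h, w)
bin_targets = torch.randint(0, 2, (n, 1, h, w)).float()
bin_loss = F.binary_cross_entropy_with_logits(bin_logits, bin_targets)

# Multi-class segmentation: logits have shape (N, num_classes, H, W) and the
# targets are integer class indices of shape (N, H, W).
mc_logits = torch.randn(n, num_classes, h, w)
mc_targets = torch.randint(0, num_classes, (n, h, w))
mc_loss = F.cross_entropy(mc_logits, mc_targets)

print(bin_loss.item(), mc_loss.item())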

Train this model

python train_gpu.py

Reference

@inproceedings{cao2022swin,
  title={Swin-unet: Unet-like pure transformer for medical image segmentation},
  author={Cao, Hu and Wang, Yueyue and Chen, Joy and Jiang, Dongsheng and Zhang, Xiaopeng and Tian, Qi and Wang, Manning},
  booktitle={European conference on computer vision},
  pages={205--218},
  year={2022},
  organization={Springer}
}

@article{liu2021swin,
  title={Swin Transformer V2: Scaling Up Capacity and Resolution},
  author={Liu, Ze and Hu, Han and Lin, Yutong and Yao, Zhuliang and Xie, Zhenda and Wei, Yixuan and Ning, Jia and Cao, Yue and Zhang, Zheng and Dong, Li and others},
  journal={arXiv preprint arXiv:2111.09883},
  year={2021}
}

License

MIT License

