Our solution is based on the well-known nnU-Net. We make three modifications:
- more data augmentation
- increasing the number of training epochs to 1200
- the DiceTopK loss function
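The DiceTopK loss combines a soft Dice term with a top-k cross-entropy term that averages only the hardest k% of pixels. The sketch below is a simplified, binary NumPy illustration of the idea, not nnU-Net's actual multi-class implementation (`function name and signature here are ours`):

```python
import numpy as np

def dice_topk_loss(probs, target, k=10, eps=1e-6):
    """Illustrative DiceTopK loss: soft Dice + top-k cross-entropy.

    probs:  (N,) predicted foreground probabilities, flattened
    target: (N,) binary ground-truth labels, flattened
    k:      keep only the hardest k percent of pixels in the CE term
    """
    # Soft Dice loss on the foreground channel
    intersection = np.sum(probs * target)
    dice = 1.0 - (2.0 * intersection + eps) / (np.sum(probs) + np.sum(target) + eps)

    # Per-pixel binary cross-entropy
    ce = -(target * np.log(probs + eps) + (1 - target) * np.log(1 - probs + eps))

    # TopK: average only the k% largest (i.e. hardest) per-pixel losses
    n_keep = max(1, int(len(ce) * k / 100))
    topk_ce = np.mean(np.sort(ce)[-n_keep:])

    return dice + topk_ce
```

With k=10 (as in `nnUNetTrainerV2_DA5_DiceTopK10`), the cross-entropy term focuses training on the hardest 10% of voxels, which helps with the small, sparse lesions in PET/CT.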
The final model is an ensemble of 13 cross-validation models, without test-time augmentation.
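Ensembling in nnU-Net works by averaging the models' softmax probability maps and then taking the argmax. A minimal sketch of that idea (the function name is ours, not nnU-Net's API):

```python
import numpy as np

def ensemble_predict(prob_maps):
    """Average class-probability maps from several models, then take argmax.

    prob_maps: list of arrays of shape (C, ...) -- one softmax output per model.
    Mirrors the averaging-based ensembling nnU-Net performs; illustrative only.
    """
    mean_probs = np.mean(np.stack(prob_maps, axis=0), axis=0)  # (C, ...)
    return np.argmax(mean_probs, axis=0)  # label map of shape (...)
```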
We train three groups of cross-validation models:
- Baseline model:
```shell
nnUNet_train 3d_fullres nnUNetTrainerV2 taskid fold  # fold in [0,1,2,3,4]
```
- More data augmentation:
```shell
nnUNet_train 3d_fullres nnUNetTrainerV2_DA5 taskid fold  # fold in [0,1,2,3,4]
```
- DiceTopK loss:
```shell
nnUNet_train 3d_fullres nnUNetTrainerV2_DA5_DiceTopK10 taskid fold  # fold in [0,1,2,3,4]
```
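The commands above expand to 15 runs (3 trainers x 5 folds), which can be enumerated with a small helper and launched one by one (e.g. via `subprocess.run`). A sketch, with `"TASK_ID"` as a placeholder for your actual nnU-Net task id:

```python
# Illustrative helper: enumerate the 15 training commands (3 trainers x 5 folds).
TRAINERS = [
    "nnUNetTrainerV2",
    "nnUNetTrainerV2_DA5",
    "nnUNetTrainerV2_DA5_DiceTopK10",
]

def training_commands(task_id="TASK_ID", folds=range(5)):
    """Return each nnUNet_train invocation as an argument list."""
    return [
        ["nnUNet_train", "3d_fullres", trainer, str(task_id), str(fold)]
        for trainer in TRAINERS
        for fold in folds
    ]
```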
Download checkpoints: https://pan.baidu.com/s/1C3TaO0IVMXsBdSjAF-HMSg (pw: 4494) or https://drive.google.com/file/d/1gnSYN2Bn1sTDLXrTWOWGtJ3IUABkxhr_/view?usp=sharing
Build the Docker image:
```shell
docker build -t autopet_fighttumor .
```
We thank:
- the autoPET organizers: https://autopet.grand-challenge.org/
- the nnU-Net developers: https://github.com/MIC-DKFZ/nnUNet