nabsabraham / focal-tversky-unet

This repo contains the code for our paper "A novel focal Tversky loss function and improved Attention U-Net for lesion segmentation" accepted at IEEE ISBI 2019.


Proper way to run inference using models?

JakobKHAndersen opened this issue · comments

Hello

First of all, thanks for creating this repository. I have a question regarding the use of the multi-scale models for segmentation of test set images. I have successfully trained the model and want to use it on my test data. If I understand the model correctly, the output is a list containing 4 arrays of different resolutions (the original input size and 3 down-scaled versions, yes?). Do I only use the sigmoid/softmax probability map of the original-input-sized output, or do I use all 4 and perform some sort of resizing with interpolation and averaging in order to get the full benefit of the models?

BR.

Hello! I just use the last preds, which are the same size as the input [see here], but you could upsample the smaller-scale predictions and do some sort of weighted average or fusion to get a better result. The line involving preds_up afterwards just resizes the prediction back to the original input image size (because the original images were really large).
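As a rough illustration of the fusion idea above, here is a minimal NumPy sketch (not from the repo): it assumes `preds` is the list of sigmoid maps ordered coarse to fine, with `preds[-1]` at full resolution, and uses simple nearest-neighbour upsampling via `np.repeat` for brevity (you would likely use bilinear interpolation in practice). The function name and weights are hypothetical.

```python
import numpy as np

def fuse_multiscale_preds(preds, weights=None):
    """Upsample each scale's probability map to full resolution
    (nearest-neighbour here, for simplicity) and average them.
    `preds` is ordered coarse -> fine; preds[-1] is full size."""
    full_h, full_w = preds[-1].shape
    if weights is None:
        # equal weighting by default; tune per scale if desired
        weights = [1.0 / len(preds)] * len(preds)
    fused = np.zeros((full_h, full_w))
    for w, p in zip(weights, preds):
        fh = full_h // p.shape[0]
        fw = full_w // p.shape[1]
        up = np.repeat(np.repeat(p, fh, axis=0), fw, axis=1)
        fused += w * up
    return fused

# toy example: three down-scaled maps plus the full-size one
preds = [np.full((32 // s, 32 // s), 0.5) for s in (8, 4, 2, 1)]
fused = fuse_multiscale_preds(preds)
mask = fused > 0.5  # threshold the fused probability map
```

If you only want the simple route the answer describes, just take `preds[-1]` and resize it back to your original image dimensions.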