plemeri / UACANet

Official PyTorch implementation of UACANet: Uncertainty Augmented Context Attention for Polyp Segmentation (ACMMM 2021)

The evaluation result is not good.

suyanzhou626 opened this issue · comments

I ran the command:

python Expr.py --config configs/UACANet-L.yaml

However, my result is as follows:
[screenshot: my evaluation results, 2021-08-11]
The official result of UACANet-L reported in the README is:

dataset              meanDic    meanIoU    wFm     Sm    meanEm    mae    maxEm    maxDic    maxIoU    meanSen    maxSen    meanSpe    maxSpe
-----------------  ---------  ---------  -----  -----  --------  -----  -------  --------  --------  ---------  --------  ---------  --------
CVC-300                0.910      0.849  0.901  0.937     0.977  0.005    0.980     0.913     0.853      0.940     1.000      0.993     0.997
CVC-ClinicDB           0.926      0.880  0.928  0.943     0.974  0.006    0.976     0.929     0.883      0.943     1.000      0.992     0.996
Kvasir                 0.912      0.859  0.902  0.917     0.955  0.025    0.958     0.915     0.862      0.923     1.000      0.983     0.987
CVC-ColonDB            0.751      0.678  0.746  0.835     0.875  0.039    0.878     0.753     0.680      0.754     1.000      0.953     0.957
ETIS-LaribPolypDB      0.766      0.689  0.740  0.859     0.903  0.012    0.905     0.769     0.691      0.813     1.000      0.932     0.936

Why is the result I get so bad? I didn't change any configuration file.

Can you tell me your device setup? Also, please try our checkpoint from this URL.
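For anyone trying the released checkpoint, here is a minimal sketch for sanity-checking the downloaded weights before evaluation. The checkpoint path is a placeholder for wherever the file from the URL above was saved, and the possible "state_dict" wrapper key is an assumption about how the file is packaged, not something confirmed by the repository.

import torch

# Minimal sketch: inspect a downloaded UACANet-L checkpoint before evaluation.
# The path below is a placeholder; adjust it to wherever the file was saved.
ckpt_path = "snapshots/UACANet-L/latest.pth"
state = torch.load(ckpt_path, map_location="cpu")

# Some releases wrap the weights under a "state_dict" key (assumption); unwrap if so.
if isinstance(state, dict) and "state_dict" in state:
    state = state["state_dict"]

print(len(state), "tensors in checkpoint")
for name, tensor in list(state.items())[:5]:
    print(name, tuple(tensor.shape))

# Once the keys look sane, the weights can be loaded into the model that
# Expr.py builds via model.load_state_dict(state).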


PyTorch: 1.8
CUDA: 11.0
GPU: RTX 3090
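For reference, a minimal sketch for reporting this kind of device setup from Python is shown below; these are generic PyTorch calls, not something the repository provides.

import torch

# Minimal environment report: PyTorch, CUDA, cuDNN versions and GPU name.
print("PyTorch:", torch.__version__)
print("CUDA:", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())
print("GPU:", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "none")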

I don't think it's a device problem, though I don't have much experience with the RTX 3090 either. I have personally seen poor results occasionally, but not that often, so you can try another experiment with the same settings. Please reopen this issue if you still have a problem.
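Since results apparently vary from run to run, here is a minimal, generic PyTorch sketch for pinning random seeds and cuDNN behaviour; this is not a switch the repository necessarily exposes, and the seed value is arbitrary.

import random

import numpy as np
import torch

# Generic sketch, not part of the repository: fixing seeds and cuDNN behaviour
# narrows run-to-run variance when the same training is launched repeatedly.
def seed_everything(seed=42):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True  # trades some speed for repeatability
    torch.backends.cudnn.benchmark = False

seed_everything(42)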


Hello, sorry to bother you.
I just ran the code as described in the README, but the result is also not that good. The only thing I changed is the batch size, which is 8 in my run. The evaluation result follows, so I don't know where the problem is.
[screenshot: evaluation results]
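For anyone checking which batch size a run will actually use, here is a minimal sketch that reads it back out of the config file. The nested key path (Train -> Dataloader -> batch_size) is an assumption about how UACANet-L.yaml is laid out and should be adjusted to the real file.

import yaml

# Sketch: print the batch size that Expr.py will receive from the config.
# The key path below is an assumption about the YAML layout.
with open("configs/UACANet-L.yaml") as f:
    cfg = yaml.safe_load(f)

print(cfg.get("Train", {}).get("Dataloader", {}).get("batch_size"))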

Hi, I ran two additional experiments with batch sizes 16 and 8, and here are the results.

[batch size 16]
[screenshot: evaluation results with batch size 16]

[batch size 8]
[screenshot: evaluation results with batch size 8]

I never knew that a small batch size would affect the ETIS dataset this much, but it turns out it does.
Let me know if you still have problems after increasing the batch size. Until then, I'll reopen this issue for convenience.

Hello, I ran it again with the batch size equal to 8. Here is the result:
[screenshot: evaluation results]
It seems to have improved in some places.
Thanks again.

Hi,
Instead of email, I am writing my comment here so others can also benefit, as I am facing the same problem described in the comments above. I am now running three experiments: (1) batch size 32, 24 GB RAM, Titan RTX, 3 nodes; (2) batch size 32, 24 GB RAM, Titan RTX, 1 node; (3) batch size 32, 24 GB RAM, Titan X, 2 nodes. I will update you on the results soon.

Following are the results:
dataset              meanDic    meanIoU    wFm     Sm    meanEm    mae    maxEm    maxDic    maxIoU    meanSen    maxSen    meanSpe    maxSpe
-----------------  ---------  ---------  -----  -----  --------  -----  -------  --------  --------  ---------  --------  ---------  --------
CVC-300                0.909      0.846  0.895  0.937     0.977  0.005    0.980     0.913     0.850      0.960     1.000      0.992     0.996
CVC-ClinicDB           0.923      0.878  0.926  0.938     0.971  0.007    0.974     0.927     0.881      0.931     1.000      0.993     0.997
Kvasir                 0.898      0.844  0.891  0.910     0.941  0.028    0.944     0.901     0.847      0.899     1.000      0.974     0.978
CVC-ColonDB            0.741      0.672  0.734  0.829     0.856  0.038    0.859     0.743     0.674      0.759     1.000      0.914     0.918
ETIS-LaribPolypDB      0.684      0.617  0.659  0.812     0.863  0.019    0.865     0.686     0.619      0.733     1.000      0.849     0.853

dataset              meanDic    meanIoU    wFm     Sm    meanEm    mae    maxEm    maxDic    maxIoU    meanSen    maxSen    meanSpe    maxSpe
-----------------  ---------  ---------  -----  -----  --------  -----  -------  --------  --------  ---------  --------  ---------  --------
CVC-300                0.909      0.846  0.895  0.937     0.976  0.005    0.979     0.912     0.849      0.958     1.000      0.992     0.996
CVC-ClinicDB           0.933      0.886  0.933  0.943     0.981  0.006    0.984     0.936     0.889      0.946     1.000      0.992     0.996
Kvasir                 0.901      0.847  0.890  0.910     0.947  0.028    0.950     0.904     0.850      0.907     1.000      0.981     0.985
CVC-ColonDB            0.756      0.687  0.749  0.837     0.872  0.036    0.875     0.758     0.689      0.766     1.000      0.911     0.914
ETIS-LaribPolypDB      0.713      0.641  0.690  0.829     0.857  0.012    0.859     0.715     0.644      0.751     1.000      0.888     0.891

These are still not as high as claimed in the paper (however, there is an improvement with batch size 32).

I'm closing this issue since there are no other updates.