xi-jia / LKU-Net

The official implementation of "U-Net vs Transformer: Is U-Net Outdated in Medical Image Registration?"


Large kernel in decoder?

xiaogengen-007 opened this issue · comments

Hi @xi-jia ,

Thank you so much for the nice work and for sharing the code!

I noticed you only used large kernels in the encoder but not in the decoder. I guess that's mainly because you want to make a fair comparison with TransMorph. I am wondering if we should also use large kernels in the decoder for the best registration performance. Have you done any experiments on that?

Thank you!

commented

Hi @xiaogengen-007, yes, we used LKs in the encoder only (LK-E) to encourage a fair comparison with TransMorph.
We did run some experiments with enlarged kernels in both the encoder and decoder (LK-DE), but those results were not included in the paper.

On 2D OASIS, we found that enlarging the LK-DE kernels up to 11x11 still improves performance over the baseline U-Net, similar to Table 1 (A1 vs C3) in the paper, and that LK-DE with 7x7 kernels achieves the best Dice. Moreover, LK-DE outperforms LK-E in most cases. Note that these experiments used NCC loss with 0.05 Diffusion regularization, which differs from the final setting (MSE + Diffusion loss) used in the paper.
However, on 3D OASIS, the results from LK-DE and LK-E are about the same; it is hard to tell which one is better.
We did not run any LK-DE experiments on IXI. Hope it helps!
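For anyone trying the LK-DE variant discussed above, the core change is simply making the kernel size of the decoder conv blocks configurable, keeping `padding = k // 2` so spatial sizes are preserved. The sketch below is a minimal, hypothetical illustration in PyTorch (the block name `LKBlock` and channel sizes are assumptions, not the repo's actual `LK-E`/`LK-DE` code):

```python
import torch
import torch.nn as nn


class LKBlock(nn.Module):
    """Conv block with a configurable (possibly large) kernel.

    Hypothetical sketch, not the actual LKU-Net implementation.
    """

    def __init__(self, in_ch, out_ch, k=7):
        super().__init__()
        # padding = k // 2 keeps spatial size unchanged for odd k,
        # so the kernel size can be enlarged without touching the rest
        # of the encoder/decoder wiring.
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)
        self.act = nn.PReLU()

    def forward(self, x):
        return self.act(self.conv(x))


# LK-E uses large kernels in the encoder only;
# LK-DE also enlarges the decoder kernels, as in the 2D OASIS trials above.
enc = LKBlock(2, 8, k=7)   # encoder stage with a 7x7 kernel
dec = LKBlock(8, 8, k=7)   # decoder stage with a 7x7 kernel (LK-DE variant)

x = torch.randn(1, 2, 32, 32)  # e.g. a concatenated moving/fixed image pair
y = dec(enc(x))
print(tuple(y.shape))  # (1, 8, 32, 32) -- spatial size preserved
```

Switching between LK-E and LK-DE is then just a matter of which blocks receive `k=7` (or `k=11`) versus the default `k=3`.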

Thank you so much for sharing the experiment results! It indeed helps a lot!
We will also try it out. Thanks again for the nice work.