yulequan / HeartSeg

code of DenseVoxNet and 3D-DSN

Home Page: http://appsrv.cse.cuhk.edu.hk/~lqyu/DenseVoxNet/index.html

Elaborate Details of the DenseVoxNet

sunformoon opened this issue

Hello Lequan,

Nice work, and thank you for sharing it! :) Basically, I would like to reproduce the results reported in your DenseVoxNet paper.

I was able to run your proposed DenseVoxNet and get preliminary testing results from the online challenge server, adopting all the default parameters of your published code. The training Dice is really good, while the testing results seem a little bit weird. From the test server, I got a mean Dice of 72.8 (myocardium) and 91.7 (blood pool) on the testing split. The testing result for the myocardium is ~10% lower than the reported 82.1, so I must have missed something. I double-checked your paper and code and list all the settings here; they are extracted directly from your code and should be unchanged. Please point out if I introduced any major differences or mistakes.
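(For reference, the Dice scores above can be reproduced per structure with a minimal NumPy sketch; mapping label 1 to myocardium and 2 to blood pool is my own assumption, not necessarily the challenge's encoding.)

```python
import numpy as np

def dice(pred, gt, label):
    """Dice coefficient for one label: 2*|A intersect B| / (|A| + |B|)."""
    a = (pred == label)
    b = (gt == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy label maps (label 1 = myocardium, 2 = blood pool; values illustrative).
gt   = np.array([[0, 1, 1, 2],
                 [0, 1, 2, 2]])
pred = np.array([[0, 1, 2, 2],
                 [0, 1, 2, 2]])
dice(pred, gt, 1)  # 2*2 / (2+3) = 0.8
```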

  1. Data pre-processing + augmentation
    patchSize: [64, 64, 64]
    zero mean + unit variance + no resampling (use_isotropic = 0)
    create_roi: aug_r = 20
    rotation: 90, 180, 270
    flip: only at the axial plane

  2. Training settings
    train_densenet.prototxt as you published on github
    poly learning rate policy with a base_lr of 0.05 and a power of 0.9
    weight decay of 0.0005 and momentum of 0.9
    max_iteration: 15000

  3. Testing settings
    deploy.prototxt as you published on github
    parameters are set to the defaults in your test_model.m
    overlap: 4, use_isotropic 0, tr_mean = 0
    RemoveMinorCC: 0.2
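For what it's worth, my understanding of the pre-processing and augmentation settings above can be sketched in NumPy as follows (a rough sketch under my own assumptions — in particular, that the axial plane corresponds to the first two array axes; the helper names are mine, not from the repo):

```python
import numpy as np

def normalize(vol):
    """Zero mean, unit variance; no resampling (matching use_isotropic = 0)."""
    return (vol - vol.mean()) / (vol.std() + 1e-8)

def augment(vol, rng):
    """Random 90/180/270-degree rotation plus an optional flip,
    both restricted to the axial plane (assumed to be axes 0 and 1)."""
    k = rng.choice([0, 1, 2, 3])           # 0, 90, 180, or 270 degrees
    vol = np.rot90(vol, k=k, axes=(0, 1))
    if rng.random() < 0.5:
        vol = vol[::-1, :, :]              # flip within the axial plane
    return vol

def random_patch(vol, size, rng):
    """Crop a random training patch (patchSize = [64, 64, 64])."""
    starts = [rng.integers(0, s - size + 1) for s in vol.shape]
    return vol[tuple(slice(s, s + size) for s in starts)]

rng = np.random.default_rng(0)
vol = normalize(rng.standard_normal((80, 80, 80)))
patch = random_patch(augment(vol, rng), size=64, rng=rng)
patch.shape  # (64, 64, 64)
```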

Apart from these settings, I have two more puzzles.
First, I notice in your paper that "we implement a kind of long skip connection to connect the transition layer to the output layer with 2x2x2 Deconv layer". Is this "long skip connection" reflected in your prototxt? I haven't found a Deconv layer with a 2x2x2 kernel size; maybe I missed that part of your prototxt.

Second, your DenseVoxNet is reported to have 1.8M parameters, while the 3D-Caffe build (https://github.com/yulequan/3D-Caffe/tree/3D-Caffe) saves a ~18 MB caffemodel on my desktop (Titan X, CUDA 7.5, and cuDNN 5.1). So I am wondering whether there is another major difference between my experiments and yours that I am not aware of.

Thank you so so so much! :) I really appreciate it!

Best,
Zhuotun

Hi Zhuotun,

  1. Due to the use of dropout and randomness in the network, there is a range of Dice performance across our experiments. However, the gap should not be as large as the one you report.

  2. In our original experiments, we used a raw CUDA implementation of 3D convolution. Since it is very slow and occupies a lot of GPU memory, we added cuDNN support for 3D convolution in the released code, following the U-Net code. I am not sure whether this change leads to a performance drop.

  3. I see you submitted average-fusion results. Generally, majority-voting results are a little better than average fusion.

  4. Your last two puzzles are about the definition and implementation of the Deconv (upsampling) layer.
    a) A better description of "long skip connection with 2x2x2 Deconv layer" would be "2x2x2 upsample layer". We actually want to do 2x upsampling to enlarge the output, and we use Deconv layers for it; the kernel size is 4x4x4 and the stride is 2x2x2. (In the U-Net code, the authors use a Deconv layer with a kernel size of 2x2x2 for 2x upsampling.)
    b) We made a mistake when calculating the total number of parameters. The 1.8M figure was computed assuming Deconv layers with 2x2x2 kernels; with 4x4x4 kernels, the network has about 4.4M parameters. We will revise this in the arXiv paper. Thanks for pointing it out.
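To make the kernel-size arithmetic concrete, here is a small sketch (the channel count of 64 is purely illustrative, not the actual DenseVoxNet layer width):

```python
def deconv3d_params(c_in, c_out, k, bias=True):
    """Parameter count of a 3D deconvolution (transposed convolution) layer."""
    return c_in * c_out * k ** 3 + (c_out if bias else 0)

def deconv_out_len(n, k, stride, pad):
    """Output length of a transposed convolution along one axis."""
    return (n - 1) * stride - 2 * pad + k

# Per layer, a 4x4x4 kernel carries 8x the weights of a 2x2x2 kernel:
assert deconv3d_params(64, 64, 4, bias=False) == 8 * deconv3d_params(64, 64, 2, bias=False)

# Both kernel choices realize exact 2x upsampling with stride 2
# (padding 0 for k=2, padding 1 for k=4):
print(deconv_out_len(32, 2, stride=2, pad=0))  # 64
print(deconv_out_len(32, 4, stride=2, pad=1))  # 64
```

If the weights are stored as float32 (4 bytes each), ~4.4M parameters come to roughly 17.6 MB, which would also be consistent with the ~18 MB caffemodel size mentioned in the question.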

I hope the above information helps.

Best,
Lequan

Hi Lequan,

Your detailed information helps me a lot! :)
Thank you so much for pointing out the dropout/randomness, the CUDA and cuDNN implementations, and majority voting. I am going to revisit my experiments and explore the influence of these settings more deeply. Moreover, your clarifications of my puzzles are very clear.

I really appreciate your help! I hope to share good news soon~

Best,
Zhuotun