SharifAmit / RVGAN

[MICCAI'21] [Tensorflow] Retinal Vessel Segmentation using a Novel Multi-scale Generative Adversarial Network

Some questions about the paper and code

Chenguang-Wang opened this issue · comments

commented

I have read your paper and run your code on my computer. I found some issues.

  1. The SFA block in the code differs from the one described in the paper: it is missing the block input being added to the terminal output (see the sketch below).
  2. The Discriminator Residual Block described in the paper is not used in the code; the version in the code simply returns two times the input.
    I also have a question: is the network in the code identical to the one in the paper, or does it still need to be modified to match?
    Looking forward to your reply!
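
To make 1) concrete, here is a minimal, hypothetical sketch of the difference; the layer types, filter counts, and activations are placeholders rather than the repository's actual SFA implementation, and the only point is the terminal add of the block input:

```python
from tensorflow.keras import layers

def sfa_block_paper(x, filters=64):
    """SFA block with the terminal add, as described in the paper.
    Assumes `x` already has `filters` channels so the Add is shape-compatible."""
    out = layers.SeparableConv2D(filters, 3, padding='same')(x)
    out = layers.BatchNormalization()(out)
    out = layers.LeakyReLU(0.2)(out)
    out = layers.SeparableConv2D(filters, 3, padding='same')(out)
    return layers.Add()([out, x])  # block input added to the terminal output

def sfa_block_code(x, filters=64):
    """Same stack of layers, but without the terminal add -- the discrepancy in 1)."""
    out = layers.SeparableConv2D(filters, 3, padding='same')(x)
    out = layers.BatchNormalization()(out)
    out = layers.LeakyReLU(0.2)(out)
    out = layers.SeparableConv2D(filters, 3, padding='same')(out)
    return out
```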

Hi,

Thank you for your question. These differences are part of experiments we carried out to see whether we could improve the quantitative results for vessel segmentation. The code has some minor changes from the paper because we are still working on improvements for the next version of the model.

For 1), we checked and saw no improvement when using two or three add operations; the difference in performance was not significant.

For 2), we trained the discriminator block with both no padding and reflection padding. In our experiments, reflection padding converged slightly faster than no padding (convergence here means reaching the optimal AUC-ROC and the other scores). However, the faster convergence was also not significant.
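
As an aside for anyone reading along: Keras has no built-in reflection-padding layer, so it is usually done with a small wrapper around tf.pad. The wrapper and the block below are only an illustrative sketch under that assumption, not the repository's exact discriminator code:

```python
import tensorflow as tf
from tensorflow.keras import layers

class ReflectionPadding2D(layers.Layer):
    """Pads height and width by `pad` pixels using reflection instead of zeros."""
    def __init__(self, pad=1, **kwargs):
        super().__init__(**kwargs)
        self.pad = pad

    def call(self, x):
        p = self.pad
        return tf.pad(x, [[0, 0], [p, p], [p, p], [0, 0]], mode='REFLECT')

def disc_conv(x, filters, reflect=True):
    """One discriminator convolution with either reflection padding or no padding."""
    if reflect:
        x = ReflectionPadding2D(1)(x)
    x = layers.Conv2D(filters, 3, padding='valid')(x)  # 'valid' = no implicit padding
    return layers.LeakyReLU(0.2)(x)
```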

As for replication, please use the code, as it contains the most up-to-date training settings.

Thanks

commented

Thanks for answering.

For 2), I think I didn't express myself clearly. The Discriminator Residual Block is Figure 3(e), not Figure 3(d), which is the Generator Residual Block. In the code, the block computes some layers but then simply returns the input added to the input (see the sketch below).
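
A minimal sketch of what I mean, with placeholder layers; only the return line matters here:

```python
from tensorflow.keras import layers

def discriminator_residual_block(x, filters=64):
    """Residual block sketch; assumes `x` already has `filters` channels."""
    out = layers.Conv2D(filters, 3, padding='same')(x)
    out = layers.LeakyReLU(0.2)(out)
    out = layers.Conv2D(filters, 3, padding='same')(out)
    # what the committed code effectively returns:  layers.Add()([x, x])  # i.e. 2 * input
    return layers.Add()([x, out])  # the residual connection described in the paper
```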
[screenshot: inner_weight]

Another question about hyperparameters for replication: in the code, inner_weight (a parameter of the encoder) is initialized to 0.5, but the paper describes it as 0.4. I changed it to match the paper.

[screenshot: loss weight parameters]
I also changed the other parameters to [1, 1, 10, 10, 10, 10, 10, 10]. The weight for the Weighted Feature Matching loss should be 10, right?
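
For reference, a weight list like this is normally passed as loss_weights when compiling a multi-output Keras model. The toy model, optimizer, and loss choices below are placeholders (not the repository's actual training script), assuming the first two outputs are the adversarial terms:

```python
from tensorflow.keras import layers, Model

# Toy stand-in for the combined GAN model, with eight outputs so the list lines up.
inp = layers.Input(shape=(64, 64, 3))
feat = layers.Conv2D(8, 3, padding='same')(inp)
outputs = [layers.Conv2D(1, 1, name=f'out_{i}')(feat) for i in range(8)]
combined = Model(inp, outputs)

combined.compile(
    optimizer='adam',
    loss=['hinge', 'hinge'] + ['mse'] * 6,
    # first two weights -> adversarial outputs, remaining six -> weighted
    # feature-matching terms, following the [1, 1, 10, ...] list above
    loss_weights=[1, 1, 10, 10, 10, 10, 10, 10],
)
```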

Thanks for catching the bug in the discriminator residual block's add operation. It must have happened when we renamed all the variables before committing to GitHub.

As for the inner weight: 0.5 will give you stable outputs while training. Changing it to 0.4 will give a slightly better AUC-ROC (by 0.5-1%), but the training outputs won't be consistent from epoch to epoch, and it will take longer training in stages to reach a good AUC-ROC score.
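
Purely as an illustration of that trade-off (this is not the repository's actual encoder code), a scalar like inner_weight can be thought of as scaling one path of a merge inside an encoder block, so moving it from 0.5 to 0.4 shifts the balance between the two paths:

```python
from tensorflow.keras import layers

def weighted_encoder_block(x, filters=64, inner_weight=0.5):
    """Hypothetical encoder block: inner_weight scales the skip path before the merge.
    Illustration only; the real placement of inner_weight in RVGAN may differ."""
    out = layers.SeparableConv2D(filters, 3, padding='same')(x)
    out = layers.LeakyReLU(0.2)(out)
    skip = layers.Conv2D(filters, 1, padding='same')(x)  # project input to match channels
    return layers.Add()([out, inner_weight * skip])      # 0.5 vs 0.4 changes this balance
```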

For the feature matching loss weight, you can try both; we didn't find any advantage to using 1 versus 10. Since the synthesis is done by the generators, the generator loss terms should be weighted more heavily than the discriminator's adversarial loss.

commented

OK.
Maybe the pretrained models should be updated, because they can't be loaded once the code is modified.
Thanks!

The pretrained models are for the generators only, so I don't see how changes to the discriminator code or to the hyperparameter tuning would affect them. The generators can be loaded with the existing code.
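
For example (the file names and the builder function here are assumptions, not the repository's exact names), loading generator-only weights in Keras looks roughly like this:

```python
from tensorflow.keras.models import load_model

# If the released files are complete saved models:
coarse_generator = load_model('coarse_generator.h5', compile=False)
fine_generator = load_model('fine_generator.h5', compile=False)

# If they are weight-only files, rebuild the architecture first, then:
# fine_generator = build_fine_generator(...)   # hypothetical builder function
# fine_generator.load_weights('fine_generator_weights.h5')
```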

If you can't load them, let me know!

Thanks

commented

Yes, I can't load the pretrained models after modifying the Discriminator Residual Block's return value.

Where did you get the discriminator's weights? We only provided the two generators' weights (fine and coarse).

In your case, if you are training with preloaded weights for the generators, then the discriminators are trained from scratch.

So changing the design of the discriminator residual block should not be a problem (if training from scratch).

commented

Sorry, I realized I had also modified the SFA block.
Thank you!

No worries!

Closing this issue.