ExplainableML / BayesCap

(ECCV 2022) BayesCap: Bayesian Identity Cap for Calibrated Uncertainty in Frozen Neural Networks

About reproducing the results in the paper Table 1

xuanlongORZ opened this issue · comments

Hi,
I've recently been trying to run the evaluation code using the provided checkpoints, but I cannot reproduce some of the results in Table 1. I wonder if I made a mistake on my part.

On my side, I use the data- and model-loading parts of the provided .ipynb notebook, successfully load the provided checkpoints, and then call eval_BayesCap(NetC, NetG, test_loader) afterwards.
In eval_BayesCap, I enable line 765 and line 766.

After I ran the evaluation on Set5, I got:
Avg. SSIM: 0.7993451356887817 | Avg. PSNR: 28.397958755493164 | Avg. MSE: 0.0016931496793404222 | Avg. MAE: 0.02605036273598671
UCE: 0.014068946489913017
C.Coeff: [[1.         0.34907256]
          [0.34907256 1.        ]]
The UCE is consistent with the paper, but I cannot reproduce the C.Coeff, SSIM, and PSNR values. I don't know if I missed something.
Thank you for your help in advance.
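For context on the one number that did match: UCE (uncertainty calibration error) is commonly computed by binning pixels by predicted uncertainty and comparing the mean error to the mean uncertainty within each bin. A minimal sketch under the assumption that the repo follows this standard definition (the actual implementation may differ in bin count or normalization):

```python
import numpy as np

def uce(errors, uncertainties, n_bins=10):
    # Bin pixels by predicted uncertainty, then accumulate the weighted
    # gap between mean error and mean uncertainty in each bin.
    edges = np.linspace(uncertainties.min(), uncertainties.max(), n_bins + 1)
    bins = np.clip(np.digitize(uncertainties, edges) - 1, 0, n_bins - 1)
    total, n = 0.0, len(errors)
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            total += mask.sum() / n * abs(
                errors[mask].mean() - uncertainties[mask].mean()
            )
    return total
```

A perfectly calibrated predictor (per-bin error equal to per-bin uncertainty) gives UCE = 0; a constant offset between the two shows up directly in the score.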

Hey @xuanlongORZ,

Please note that SSIM etc. are reported for the base model (i.e., SRGAN), not for BayesCap. You should use the output of the base model as-is, and take only the uncertainty maps from BayesCap.
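The protocol described above can be sketched as follows, with PSNR as a stand-in image-quality metric; `base_model` and `bayescap` are hypothetical callables standing in for the repo's networks (the actual `NetC`/`NetG` signatures and outputs may differ):

```python
import numpy as np

def psnr(ref, out, max_val=1.0):
    # Peak signal-to-noise ratio between a reference and an output image.
    mse = np.mean((ref - out) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def evaluate(base_model, bayescap, batches):
    # Image-quality metrics come from the *base* model's output;
    # BayesCap contributes only the uncertainty estimates.
    psnrs, uncerts = [], []
    for lr, hr in batches:
        sr = base_model(lr)             # base (SRGAN) super-resolved output
        mu, alpha, beta = bayescap(sr)  # BayesCap post-hoc outputs (assumed shape)
        psnrs.append(psnr(hr, sr))      # metric on the base output, not on mu
        uncerts.append(alpha)           # keep only the uncertainty map
    return float(np.mean(psnrs)), uncerts
```

Scoring `mu` (BayesCap's reconstruction) instead of `sr` would explain lower SSIM/PSNR than Table 1 while leaving the UCE intact, which matches the mismatch reported above.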