ExplainableML / BayesCap

(ECCV 2022) BayesCap: Bayesian Identity Cap for Calibrated Uncertainty in Frozen Neural Networks

About implementation details in training for the super-resolution task

xuanlongORZ opened this issue

Hi,
I have some questions while trying to reproduce the training for the super-resolution task:
In Section 4.1, the Super-resolution subsection says,

the BayesCap is trained on ImageNet patches sized 84×84 to perform 4× super-resolution

while the Implementation Details subsection says,

a batch size of 2 with images that are resized to 256 × 256

I wonder what I should put for the ? in this line: `train_dset = ImgDset(dataroot='./xxx', image_size=(?,?), upscale_factor=4, mode='train')`. Also, should the batch size be 4 or 2 during training (since there is a commented-out line in the .ipynb that sets the batch size to 4)?

Thank you.

Hey @xuanlongORZ,

You may choose a batch size of 2, 4, or 8 depending on the GPU you have access to. In general, smaller batch sizes give a slight boost in performance, as they yield sharper images in super-resolution.

For the ?, please use 84 (as mentioned in the README, the SRGAN base code was taken from https://github.com/Lornatang/SRGAN-PyTorch). It provides access to ImageNet patches of 84×84, on which the training was done. However, for evaluation you can pass images of other dimensions, since the architecture is fully convolutional.
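
Putting the two points together, a minimal sketch of the training setup might look like the following. It assumes the `ImgDset` signature shown in the question and a standard PyTorch `DataLoader`; the import path, the `dataroot` value, and `num_workers` are placeholders, not part of the repo's documented interface.

```python
from torch.utils.data import DataLoader

# Assumption: ImgDset is the dataset class from the repo's notebook
# (SRGAN-PyTorch-derived code); adjust the import to wherever it is defined.
from utils import ImgDset

# './xxx' is a placeholder for the folder of ImageNet patches.
train_dset = ImgDset(
    dataroot='./xxx',
    image_size=(84, 84),   # 84x84 patches, matching Sec. 4.1 of the paper
    upscale_factor=4,      # 4x super-resolution
    mode='train',
)

# Batch sizes of 2, 4, or 8 all work; smaller batches tend to give
# slightly sharper super-resolved outputs.
train_loader = DataLoader(train_dset, batch_size=2, shuffle=True, num_workers=4)
```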