xwjabc / hed

A PyTorch reimplementation of Holistically-Nested Edge Detection

batch-size >1 possible for training?

aasharma90 opened this issue · comments

Hi @xwjabc,

Thanks for your code!

Could you please tell me if it is possible to use a larger batch size (> 1) for training? When I try this, I get the following error:

```
upsample2 = torch.nn.functional.conv_transpose2d(score_dsn2, self.weight_deconv2, stride=2)
Expected tensor for argument #1 'input' to have the same device as tensor for argument #2 'weight'
```

Just wondering if you know this already.

Thanks,
AA

Hi! Currently, the code does not support a batch size > 1, since the training images have different sizes and PyTorch cannot stack tensors of different sizes into a single mini-batch.
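A minimal sketch of why differently-sized images cannot be batched directly (the image sizes here are illustrative, roughly matching BSDS-style data, not taken from this repo):

```python
import torch

# Two "images" with different spatial sizes (C, H, W).
img_a = torch.zeros(3, 321, 481)
img_b = torch.zeros(3, 481, 321)

# The default DataLoader collate function stacks samples into one tensor,
# which raises a RuntimeError when their shapes differ.
try:
    batch = torch.stack([img_a, img_b])
except RuntimeError as e:
    print("cannot batch:", e)
```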

Not sure if this is considered best practice, but if you really want the batch-size speedup, you could pad or resize all images to shared dimensions in the data loader.
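One way to do the padding is a custom `collate_fn` for the `DataLoader`. This is only a sketch under the assumption that each sample is an `(image, label)` pair of `(C, H, W)` tensors; `pad_collate` is a hypothetical helper, not part of this repo, and HED's loss would also need to mask out the padded region:

```python
import torch
import torch.nn.functional as F

def pad_collate(samples):
    """Zero-pad a list of (image, label) pairs to the largest H and W
    in the batch, then stack into (N, C, H, W) tensors.
    Hypothetical helper for illustration only."""
    max_h = max(img.shape[1] for img, _ in samples)
    max_w = max(img.shape[2] for img, _ in samples)
    imgs, labels = [], []
    for img, lbl in samples:
        pad_h, pad_w = max_h - img.shape[1], max_w - img.shape[2]
        # F.pad takes (left, right, top, bottom) for the last two dims.
        imgs.append(F.pad(img, (0, pad_w, 0, pad_h)))
        labels.append(F.pad(lbl, (0, pad_w, 0, pad_h)))
    return torch.stack(imgs), torch.stack(labels)
```

It would then be passed as `collate_fn=pad_collate` when constructing the `torch.utils.data.DataLoader`.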

Hi, I wonder: if you use batch size = 1, how long does training take, and which epoch's checkpoint do you use for testing?

We train the HED model for 40 epochs and use the last epoch's checkpoint for evaluation. Training takes ~27 hours on one NVIDIA GeForce GTX Titan X (Maxwell).