dct_h and dct_w
myasser63 opened this issue · comments
How can I set dct_h and dct_w if I want to add an FCA layer to another model? The feature maps at the layers where I want to insert the FCA layer are 160x160, 80x80, 40x40, and 20x20.
Please advise.
@myasser63
You can add the FCA layer directly, without any modification. The feature map's size is handled automatically, as shown here:
Lines 54 to 55 in aa5fb63:
So should I leave dct_h and dct_w as they are, or set them to the feature map sizes?
self.FCA = MultiSpectralAttentionLayer(in_channels, self.dct_h, self.dct_w)
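For context, the reason the constructor can take a fixed dct_h and dct_w regardless of the incoming feature map size is that the layer pools its input down to that size internally. A minimal sketch of that behavior, assuming adaptive average pooling is the resizing op (the same op the error message below points to) and a hypothetical fixed DCT size of 7x7:

```python
import torch
import torch.nn.functional as F

# Assumption for illustration: the attention layer reduces any input
# spatial size to a fixed (dct_h, dct_w) before computing channel
# weights, so 160x160, 80x80, 40x40, and 20x20 maps all work unchanged.
dct_h, dct_w = 7, 7  # hypothetical fixed DCT size

for size in (160, 80, 40, 20):
    x = torch.randn(1, 64, size, size)
    pooled = F.adaptive_avg_pool2d(x, (dct_h, dct_w))
    # Spatial dims collapse to (dct_h, dct_w) no matter the input size.
    print(size, tuple(pooled.shape))
```

This is only a sketch of the resizing step, not the full attention computation.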
I am trying it this way and getting this error:
self.FCA = MultiSpectralAttentionLayer(c1, c2wh[c1], c2wh[c1])
Error: RuntimeError: adaptive_avg_pool2d_backward_cuda does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True)'. You can turn off determinism just for this operation, or you can use the 'warn_only=True' option, if that's acceptable for your application. You can also file an issue at https://github.com/pytorch/pytorch/issues
@myasser63
As the error says, it's a problem with adaptive_avg_pool2d. You can either downgrade it to a warning with:
torch.use_deterministic_algorithms(True, warn_only=True)
or you can turn off determinism by:
torch.use_deterministic_algorithms(False)
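To make the two options concrete, here is a small sketch (the tensor shapes are placeholders, not from the model above). With warn_only=True, a non-deterministic op such as the CUDA backward of adaptive_avg_pool2d emits a warning instead of raising; on CPU the op is deterministic, so this snippet runs either way:

```python
import torch
import torch.nn.functional as F

# Option 1: keep determinism checks, but only warn on ops (like the
# CUDA backward of adaptive_avg_pool2d) that lack a deterministic
# implementation, instead of raising RuntimeError.
torch.use_deterministic_algorithms(True, warn_only=True)

x = torch.randn(1, 8, 20, 20, requires_grad=True)  # placeholder shape
F.adaptive_avg_pool2d(x, (7, 7)).sum().backward()  # no RuntimeError
print(x.grad.shape)

# Option 2: drop the determinism requirement entirely.
# torch.use_deterministic_algorithms(False)
```

Note that warn_only=True keeps deterministic implementations where they exist and only relaxes the check for ops that have none.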