google-research / maxim

[CVPR 2022 Oral] Official repository for "MAXIM: Multi-Axis MLP for Image Processing". SOTA for denoising, deblurring, deraining, dehazing, and enhancement.

Home Page: https://arxiv.org/abs/2201.02973

Torch Version UNetEncoderBlock causes multi-card training error

Yeeesir opened this issue · comments

When I use the maxim_pytorch implementation provided by the link and try multi-GPU training, the following error occurred:

  File "/home/miniconda3/lib/python3.8/site-packages/torch/autograd/__init__.py", line 154, in backward
    Variable._execution_engine.run_backward(
RuntimeError: Function BroadcastBackward returned an invalid gradient at index 76 - got [0] but expected shape compatible with [0, 32, 2, 2]
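
For context, BroadcastBackward comes from torch.nn.DataParallel, which broadcasts the module's parameters to every GPU in the forward pass and sums the per-GPU gradients in the backward pass, so each broadcast parameter must receive a gradient matching its shape. Below is a minimal sketch of the kind of setup that hits this path (the MAXIM constructor, import path, and input shape here are placeholders, not the exact training code):

```python
# Minimal sketch of a DataParallel training step (placeholders, not exact code).
import torch
import torch.nn as nn

from maxim_pytorch import MAXIM   # assumed import path of the torch port

model = nn.DataParallel(MAXIM()).cuda()          # hypothetical constructor/arguments
x = torch.randn(4, 3, 256, 256, device="cuda")   # assumed input shape

out = model(x)
loss = out.mean() if torch.is_tensor(out) else sum(o.mean() for o in out)
loss.backward()   # BroadcastBackward sums the per-GPU gradients here; a parameter
                  # whose replica gradient is empty triggers the shape error above
```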

I found that the key problem lies in the torch implementation of UNetEncoderBlock; the other network components did not introduce errors.
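
To narrow it down, a single forward/backward pass on one GPU can list the parameters that DataParallel tends to have trouble with: parameters with a zero-sized dimension (the reported shape [0, 32, 2, 2] looks like a conv weight with zero output channels) and parameters that never receive a gradient. A rough diagnostic sketch, using the same placeholder constructor as above:

```python
# Diagnostic sketch: flag zero-sized parameters and parameters that get no
# gradient after one forward/backward on a single GPU.
import torch

from maxim_pytorch import MAXIM   # assumed import path, as in the sketch above

model = MAXIM().cuda()            # hypothetical constructor/arguments
x = torch.randn(1, 3, 256, 256, device="cuda")
out = model(x)
loss = out.mean() if torch.is_tensor(out) else sum(o.mean() for o in out)
loss.backward()

for name, p in model.named_parameters():
    if p.numel() == 0:
        print("zero-sized parameter:", name, tuple(p.shape))
    elif p.grad is None:
        print("no gradient reached:", name, tuple(p.shape))
```

Any hits concentrated in UNetEncoderBlock would explain why only that module breaks multi-card training.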