SoftPool-master/pytorch/CUDA/softpool_cuda.cpp":119, please report a bug to PyTorch. output_grad must be a contiguous tensor
bimver opened this issue
I have successfully installed SoftPool on my machine and tested your train.py code with ResNet-50; it runs without errors.
However, when I use SoftPool2d in my own code, I want to apply it to a bs×C×125×160 tensor to obtain a bs×C×1×20 tensor, so I set kernel_size=(125, 8) and stride=(125, 8), i.e. self.pool = SoftPool2d(kernel_size=(125, 8), stride=(125, 8)). During the backward pass I get the error:
'SoftPool-master/pytorch/CUDA/softpool_cuda.cpp":119, please report a bug to PyTorch. output_grad must be a contiguous tensor'.
Could you help me solve this? My PyTorch version is 1.6.
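For context, here is a minimal sketch of the shapes involved and of how a view-producing op can yield the non-contiguous tensor that such a check rejects. It uses avg_pool2d as a hypothetical stand-in for SoftPool2d (same kernel and stride, not the SoftPool operator itself):

```python
import torch

# Hypothetical stand-in: avg_pool2d in place of SoftPool2d, same kernel/stride.
x = torch.randn(2, 4, 125, 160)
pooled = torch.nn.functional.avg_pool2d(x, kernel_size=(125, 8), stride=(125, 8))
print(pooled.shape)  # torch.Size([2, 4, 1, 20])

# A view-producing op (transpose/permute) leaves the data in place but
# reorders the strides, so the result is no longer contiguous in memory:
t = x.transpose(2, 3)
print(t.is_contiguous())               # False
print(t.contiguous().is_contiguous())  # True
```

When such a non-contiguous layout reaches a CUDA kernel that assumes sequential storage, a contiguity check like the one in softpool_cuda.cpp fails.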
Hi @bimver,
This issue is a duplicate of #6. You are essentially performing an operation, in this case somewhere after SoftPool, that changes the tensor's storage in memory to a non-sequential layout. A simple solution is to call .contiguous() on grad_output and re-build the project, i.e. add:
```python
@staticmethod
def backward(ctx, grad_output):
    # Create contiguous tensor (if tensor is not contiguous)
    if not grad_output.is_contiguous():
        grad_output = grad_output.contiguous()
    ...
```
Best,
Alex
Thank you very much, it works now. I think adding

```python
@staticmethod
def backward(ctx, grad_output):
    # Create contiguous tensor (if tensor is not contiguous)
    if not grad_output.is_contiguous():
        grad_output = grad_output.contiguous()
```

is useful.
I think it's useful as well. I've included a contiguous check in both the forward and backward functions in the latest commit 041b2ce.
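The pattern of guarding both directions can be sketched as a custom torch.autograd.Function. This is an illustrative sketch, not the actual SoftPool code: `_pool_kernel` and `ContiguousGuardedPool` are hypothetical names, with a trivial operation standing in for the real CUDA kernel that requires contiguous storage.

```python
import torch

def _pool_kernel(t):
    # Hypothetical stand-in for a CUDA kernel that assumes contiguous storage.
    assert t.is_contiguous()
    return t * 2.0

class ContiguousGuardedPool(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input):
        # Guard the forward input before handing it to the kernel.
        if not input.is_contiguous():
            input = input.contiguous()
        return _pool_kernel(input)

    @staticmethod
    def backward(ctx, grad_output):
        # Guard the incoming gradient the same way.
        if not grad_output.is_contiguous():
            grad_output = grad_output.contiguous()
        return _pool_kernel(grad_output)

x = torch.randn(2, 3, requires_grad=True)
y = ContiguousGuardedPool.apply(x)

# Feed a non-contiguous gradient; the guard makes it safe either way.
g = torch.ones(3, 2).t()
assert not g.is_contiguous()
y.backward(g)
print(x.grad.shape)  # torch.Size([2, 3])
```

The guard is cheap when the tensor is already contiguous (`.contiguous()` is a no-op in that case), so adding it to both functions costs little.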
Python 3.7, PyTorch 1.6.0, CUDA 10.2.0
In the CUDA_SOFTPOOL2d class:

```python
@staticmethod
def backward(ctx, grad_output):
    # Create contiguous tensor (if tensor is not contiguous)
    if not grad_output.is_contiguous():
        grad_output = grad_output.contiguous()
```
Using `python setup.py build install` rather than `make install` worked for me.