NVIDIA-AI-IOT / torch2trt

An easy to use PyTorch to TensorRT converter

Inconsistent inference results with AdaptiveMaxPool3d operator

Thrsu opened this issue · comments

Description:

I'm experiencing a discrepancy between the inference results of my PyTorch model and the TensorRT model produced by torch2trt. The converted model does not even return the expected output shape: the spatial dimensions keep the input size instead of the requested output size.

Reproduce

The problem can be reproduced with the following script:

from torch2trt import torch2trt
import torch

# AdaptiveMaxPool3d with target output size 3 in every spatial dimension
model = torch.nn.AdaptiveMaxPool3d(3).eval().cuda()
input_data = torch.randn([2, 3, 5, 6, 7], dtype=torch.float32).cuda()
model_trt = torch2trt(model, [input_data])
y = model(input_data)
y_trt = model_trt(input_data)

# check the output against PyTorch
print(torch.max(torch.abs(y - y_trt)))

The traceback is as follows:

Traceback (most recent call last):
  ...
    print(torch.max(torch.abs(y - y_trt)))
RuntimeError: The size of tensor a (3) must match the size of tensor b (5) at non-singleton dimension 2
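The error shows that the TensorRT output kept the input's spatial size (5) where PyTorch returns the requested output size (3), which suggests the converter ignores `output_size`. For reference, PyTorch derives each adaptive-pooling window as `start = floor(i * in / out)`, `end = ceil((i + 1) * in / out)`, so an input of length 5 with `output_size=3` must yield exactly 3 elements per dimension. A minimal pure-Python sketch of that rule (my own illustration, not torch2trt code):

```python
import math

def adaptive_max_pool_1d(xs, out_size):
    """Reference 1-D adaptive max pooling using PyTorch's window rule."""
    n = len(xs)
    out = []
    for i in range(out_size):
        # window i covers indices [floor(i*n/out), ceil((i+1)*n/out))
        start = (i * n) // out_size
        end = math.ceil((i + 1) * n / out_size)
        out.append(max(xs[start:end]))
    return out

# Input length 5 reduced to output length 3, as the 3-D case should do per axis
print(adaptive_max_pool_1d([1, 4, 2, 5, 3], 3))  # -> [4, 5, 5]
```

The TensorRT engine instead behaves as if the pooling window were 1x1x1, leaving the spatial extent untouched, which is why the elementwise comparison fails with a size mismatch rather than a numeric tolerance.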

Environment

  • torch: 1.11.0
  • torch2trt: 0.4.0
  • tensorrt: 8.6.1.6

The AdaptiveMaxPool2d operator has the same problem. Here is a script that reproduces it:

import torch
from torch2trt import torch2trt

# AdaptiveMaxPool2d with target output size (3, 4)
model = torch.nn.AdaptiveMaxPool2d((3, 4)).eval().cuda()
input_data = torch.randn([1, 3, 5, 6], dtype=torch.float32).cuda()
model_trt = torch2trt(model, [input_data])
output = model(input_data)
output_trt = model_trt(input_data)

print(torch.max(torch.abs(output - output_trt)))

The traceback is as follows:

Traceback (most recent call last):
    ...
    print(torch.max(torch.abs(output - output_trt)))
RuntimeError: The size of tensor a (4) must match the size of tensor b (6) at non-singleton dimension 3
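Until the converter is fixed, one possible workaround (a sketch under the assumption that the input spatial sizes are static and evenly divisible by the target size, which holds for many real models but not for the 5x6x7 input above): in that case adaptive pooling reduces to an ordinary fixed-window `MaxPool3d`, which torch2trt converts through its standard pooling converter. The helper below is hypothetical, not part of torch2trt:

```python
import torch

def as_fixed_max_pool3d(out_size, in_spatial):
    """Replace AdaptiveMaxPool3d(out_size) with an equivalent MaxPool3d.

    Only valid when every input spatial dim is divisible by out_size; then
    PyTorch's adaptive windows all have size in/out with stride in/out.
    """
    assert all(s % out_size == 0 for s in in_spatial), "sizes must divide evenly"
    kernel = tuple(s // out_size for s in in_spatial)
    return torch.nn.MaxPool3d(kernel_size=kernel, stride=kernel)

# Sanity check on CPU: identical results for a divisible input shape
x = torch.randn(2, 3, 6, 6, 6)
ref = torch.nn.AdaptiveMaxPool3d(3)(x)
fixed = as_fixed_max_pool3d(3, x.shape[2:])(x)
print(torch.equal(ref, fixed))  # True when sizes divide evenly
```

The substituted module can then be passed to `torch2trt` in place of the adaptive layer; for non-divisible shapes the windows overlap unevenly and no single fixed kernel reproduces them, so this workaround does not apply.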