Inconsistent inference results between PyTorch and converted TensorRT model with Pad operator
Description
When converting a PyTorch model that consists solely of a Pad operator (`torch.nn.functional.pad` with `mode='replicate'`) to TensorRT via torch2trt, the converted model produces results that are inconsistent with the original PyTorch implementation.
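For reference, `mode='replicate'` is documented to repeat the input's edge values; a minimal example of the expected PyTorch behavior:

```python
import torch
import torch.nn.functional as F

x = torch.arange(4.).reshape(1, 1, 2, 2)
print(F.pad(x, (1, 1, 1, 1), 'replicate'))
# tensor([[[[0., 0., 1., 1.],
#           [0., 0., 1., 1.],
#           [2., 2., 3., 3.],
#           [2., 2., 3., 3.]]]])
```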
Reproduction
This issue can be reproduced with the following script:
```python
import torch
from torch.nn import Module
from torch2trt import torch2trt

para_0 = torch.randn([1, 2, 2, 2], dtype=torch.float32).cuda()
para_1 = (2, 2, 2, 2)   # pad the last two dims: (left, right, top, bottom)
para_2 = 'replicate'

class Pad(Module):
    def forward(self, *args):
        return torch.nn.functional.pad(args[0], para_1, para_2)

model = Pad().float().eval().cuda()
model_trt = torch2trt(model, [para_0])

output = model(para_0)
output_trt = model_trt(para_0)

# Maximum absolute difference between PyTorch and TensorRT outputs
print(torch.max(torch.abs(output - output_trt)))
```
The output is:
```
tensor(2.1606, device='cuda:0')
```
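If it helps triage: one hypothesis (mine, not verified against the torch2trt converter code) is that the converter silently drops the mode argument and falls back to constant (zero) padding. Continuing from the script above, this would be easy to check:

```python
# Hypothesis check (assumption, not verified against torch2trt internals):
# if the converter ignores mode='replicate' and pads with zeros, the TensorRT
# output should instead match PyTorch's constant-mode padding exactly.
output_constant = torch.nn.functional.pad(para_0, para_1, 'constant', 0.0)
print(torch.max(torch.abs(output_constant - output_trt)))  # 0 would support the hypothesis
```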
Environment
- torch: 2.1.1
- torch2trt: 0.4.0
- tensorrt: 8.6.1
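In case it is useful to others hitting this, a possible workaround is to avoid the `F.pad` converter entirely and emulate replicate padding of the last two dims with slicing, `expand`, and `torch.cat`. `ReplicatePad2d` below is a hypothetical helper I sketched, not part of torch or torch2trt, and I have not run it through torch2trt:

```python
import torch
from torch.nn import Module

class ReplicatePad2d(Module):
    """Replicate-pads the last two dims by (left, right, top, bottom)."""
    def __init__(self, pad):
        super().__init__()
        self.left, self.right, self.top, self.bottom = pad

    def forward(self, x):
        # Repeat the edge columns, then the edge rows. Replicate padding
        # clamps to the boundary, so every padded column/row equals the edge.
        left = x[..., :1].expand(*x.shape[:-1], self.left)
        right = x[..., -1:].expand(*x.shape[:-1], self.right)
        x = torch.cat([left, x, right], dim=-1)
        top = x[..., :1, :].expand(*x.shape[:-2], self.top, x.shape[-1])
        bottom = x[..., -1:, :].expand(*x.shape[:-2], self.bottom, x.shape[-1])
        return torch.cat([top, x, bottom], dim=-2)

# Sanity check on the PyTorch side:
x = torch.randn(1, 2, 2, 2).cuda()
ref = torch.nn.functional.pad(x, (2, 2, 2, 2), 'replicate')
out = ReplicatePad2d((2, 2, 2, 2))(x)
print(torch.max(torch.abs(ref - out)))  # should print 0
```

On the PyTorch side this matches `F.pad` exactly; whether torch2trt converts `expand` and `cat` correctly for this case would still need to be verified.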