NVIDIA-AI-IOT / torch2trt

An easy to use PyTorch to TensorRT converter

AttributeError: 'DataParallel' object has no attribute 'img_size'

huangshilong911 opened this issue

Hi, I'm using torch2trt for model conversion and I get the following error when converting a .pth to a .engine, although converting another network's .pth previously worked fine. Is this caused by the network structure, a parameter mismatch, or something else that happened while I was training the model?

I should also mention that the problematic .pth file was pruned. Could the pruning operation have left parameters missing or null and caused the error? The trained .pth file behaves normally during inference and the problem appears only during model conversion, so is it that conversion places stricter requirements on the parameters and other contents of the .pth file than training and inference do?

Traceback (most recent call last):
File "convert-sam-trt.py", line 90, in
model_trt = torch2trt(model, [batched_input, multimask_output], fp16_mode=True,strict_type_constraints=True)
File "/home/jetson/miniconda3/envs/sam0/lib/python3.8/site-packages/torch2trt-0.5.0-py3.8.egg/torch2trt/torch2trt.py", line 558, in torch2trt
outputs = module(*inputs)
File "/home/jetson/miniconda3/envs/sam0/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1111, in _call_impl
return forward_call(*input, **kwargs)
File "/home/jetson/miniconda3/envs/sam0/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/jetson/Workspace/aicam/ircamera/segment_anything_kd/modeling/sam.py", line 97, in forward
input_images = torch.stack([self.preprocess(x["image"]) for x in batched_input], dim=0)
File "/home/jetson/Workspace/aicam/ircamera/segment_anything_kd/modeling/sam.py", line 97, in
input_images = torch.stack([self.preprocess(x["image"]) for x in batched_input], dim=0)
File "/home/jetson/Workspace/aicam/ircamera/segment_anything_kd/modeling/sam.py", line 171, in preprocess
padh = self.image_encoder.img_size - h
File "/home/jetson/miniconda3/envs/sam0/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1186, in getattr
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'DataParallel' object has no attribute 'img_size'
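
For context, nn.DataParallel registers the wrapped model only as its .module child, so plain Python attributes such as img_size are not forwarded by the wrapper and are only reachable through .module. A minimal sketch with a hypothetical Encoder class (a stand-in for the image encoder, not code from this repository) reproduces the error:

import torch.nn as nn

# Hypothetical stand-in for the image encoder: img_size is a plain Python
# attribute, not a parameter, buffer, or submodule, so the wrapper cannot
# resolve it.
class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.img_size = 1024

encoder = nn.DataParallel(Encoder())
print(encoder.module.img_size)  # 1024 -- the attribute lives on the wrapped module
# encoder.img_size              # raises AttributeError: 'DataParallel' object has no attribute 'img_size'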

commented

Hi @huangshilong911 ,

Thanks for reaching out.

Are you able to share the script you're running to optimize the model with torch2trt?

FYI - We have a repository that optimizes SAM with TensorRT and knowledge distillation here:

https://github.com/NVIDIA-AI-IOT/nanosam

John

Thanks John for your reply. The problem was solved by converting the model from DataParallel back to a normal module:

SlimSAM_model = torch.load(<model_path>)
SlimSAM_model.image_encoder = SlimSAM_model.image_encoder.module
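
For reference, a slightly more defensive version of the same fix (a sketch only: the checkpoint path is a placeholder, moving the model to the GPU is assumed, and batched_input / multimask_output are assumed to be prepared as in convert-sam-trt.py, which is not shown here) unwraps the encoder only when it is actually wrapped, then proceeds with the conversion call from the traceback above:

import torch
import torch.nn as nn
# from torch2trt import torch2trt

SlimSAM_model = torch.load("<model_path>")  # placeholder path
if isinstance(SlimSAM_model.image_encoder, nn.DataParallel):
    SlimSAM_model.image_encoder = SlimSAM_model.image_encoder.module
SlimSAM_model = SlimSAM_model.eval().cuda()  # assumed: TensorRT conversion runs on the GPU

# Prepare batched_input and multimask_output as in convert-sam-trt.py, then:
# model_trt = torch2trt(SlimSAM_model, [batched_input, multimask_output],
#                       fp16_mode=True, strict_type_constraints=True)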