Unable to convert SigLIP text transformer due to missing model input when exporting model to ONNX
aliencaocao opened this issue
Billy Cao commented
```python
import torch
from transformers import SiglipModel
from torch2trt import torch2trt

# Load SigLIP and pull out just the text transformer
model = SiglipModel.from_pretrained('google/siglip-large-patch16-384', torch_dtype=torch.float16).cuda()
text_model = model.text_model

# Dummy input: batch of 1, sequence length 64 token ids
dummy = torch.ones(1, 64, dtype=torch.long, device='cuda')
text_model(dummy)  # works fine

# Conversion fails here
model_trt = torch2trt(text_model, [dummy], fp16_mode=True,
                      min_shapes=[(1, 64)], opt_shapes=[(1, 64)], max_shapes=[(1, 64)],
                      use_onnx=True)
```
The conversion step issues two calls to text_model.forward. The first is normal, with the dummy input. The second, however, somehow passes no arguments at all, so the text input is None inside forward, which breaks the tracing.
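You can see this by wrapping forward with some throwaway logging (a minimal sketch; logged_forward is just hypothetical instrumentation I added, not torch2trt API):

```python
# Hypothetical instrumentation: log every call into forward during conversion.
orig_forward = text_model.forward

def logged_forward(*args, **kwargs):
    print('forward called with', len(args), 'positional args, kwargs:', list(kwargs))
    return orig_forward(*args, **kwargs)

# Instance attribute shadows the bound method, so all calls route through here
text_model.forward = logged_forward

model_trt = torch2trt(text_model, [dummy], fp16_mode=True,
                      min_shapes=[(1, 64)], opt_shapes=[(1, 64)], max_shapes=[(1, 64)],
                      use_onnx=True)
# With the behavior described above: the first call shows one positional arg
# (the dummy tensor), the second shows no arguments at all.
```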
I tried to manually force the input back to the dummy one by creating a new dummy input inside forward, but that failed because the new tensor is a different object.
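For reference, the forcing attempt looked roughly like this (a sketch; TextModelWithFallback and its structure are mine, not an actual fix):

```python
from torch import nn

class TextModelWithFallback(nn.Module):
    """Hypothetical wrapper: substitute a saved dummy input when the
    tracer calls forward with no arguments."""
    def __init__(self, text_model, fallback):
        super().__init__()
        self.text_model = text_model
        self.fallback = fallback

    def forward(self, input_ids=None):
        if input_ids is None:
            # Substituting the input here is exactly what fails: torch2trt
            # sees a tensor object different from the one it recorded.
            input_ids = self.fallback
        return self.text_model(input_ids)

wrapped = TextModelWithFallback(text_model, dummy).cuda()
```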
Managed to trace it to torch2trt/torch2trt/torch2trt.py, line 605 at commit 4e820ae.