Provide trt model
cross-hello opened this issue · comments
Nanyu commented
Your issue may already be reported!
Please search the issues before creating one.
Current Behavior
I am converting an ONNX model to TRT using the code below (cloned from here):
import os
import sys

import tensorrt as trt


def convert(model_path, engine_file_path):
    TRT_LOGGER = trt.Logger()
    # model_path = 'FashionMNIST.onnx'
    # engine_file_path = 'FashionMNIST.trt'
    # Explicit-batch network (batch size fixed to 1 below)
    EXPLICIT_BATCH = 1 << (int)(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    with trt.Builder(TRT_LOGGER) as builder, \
            builder.create_network(EXPLICIT_BATCH) as network, \
            trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = 1 << 28
        builder.max_batch_size = 1
        if not os.path.exists(model_path):
            print('ONNX file {} not found.'.format(model_path))
            sys.exit(0)
        print('Loading ONNX file from path {}...'.format(model_path))
        with open(model_path, 'rb') as model:
            print('Beginning ONNX file parsing')
            if not parser.parse(model.read()):
                print('ERROR: Failed to parse the ONNX file.')
                for error in range(parser.num_errors):
                    print(parser.get_error(error))
                return  # bug fix: do not build an engine from a broken network
        network.get_input(0).shape = [1, 1, 28, 28]
        print('Completed parsing of ONNX file')
        engine = builder.build_cuda_engine(network)
        with open(engine_file_path, 'wb') as f:
            f.write(engine.serialize())


if __name__ == '__main__':
    if len(sys.argv) < 3:
        print(f'Usage: {sys.argv[0]} input_onnx_filename output_trt_file_name')
        sys.exit(1)  # bug fix: exit instead of falling through to convert()
    convert(sys.argv[1], sys.argv[2])
As the following screenshot shows, parsing the ONNX model seems to fail.
How to Reproduce
Describe what you want to do
- What input videos you will provide, if any:
- What outputs you are expecting:
- Ask your questions here, if any:
- If you can, please provide the TRT model as-is. (The models downloaded with FastMOT/scripts/download_models.sh do not include a native TRT model.)
- Otherwise, please tell me how to convert it. Thanks in advance~
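As an alternative to the Python script above, TensorRT ships a command-line tool, trtexec, that can build an engine directly from an ONNX file on the target device. A minimal sketch, assuming trtexec is on your PATH and using placeholder file names (the exact flags vary between TensorRT versions; older releases use --workspace, newer ones --memPoolSize):

```shell
# Build a TensorRT engine from an ONNX model with trtexec.
# On Jetson, trtexec is typically found at /usr/src/tensorrt/bin/trtexec.
# model.onnx / model.trt are placeholders -- substitute your own files.
trtexec --onnx=model.onnx \
        --saveEngine=model.trt \
        --workspace=256        # workspace size in MiB (roughly matches 1 << 28 bytes)
```

Note that, as the Common issues section below says, the resulting engine file is tied to the GPU and TensorRT version it was built on and cannot be copied across architectures.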
Your Environment
- Desktop
- Operating System and version:
- NVIDIA Driver version:
- Used the docker image?
- NVIDIA Jetson
- Which Jetson?
- Jetpack version:
- Ran install_jetson.sh?
- Reinstalled OpenCV from Jetpack?
Common issues
- GStreamer warnings are normal
- If you have issues with GStreamer on Desktop, disable GStreamer and build FFMPEG instead in Dockerfile
- TensorRT plugin and engine files have to be built on the target platform and cannot be copied from a different architecture
- Reinstalled OpenCV is usually not as optimized as the one shipped in Jetpack
Nanyu commented
Strangely, the problem was solved after re-downloading the corresponding modules.
Trần Gia Bảo commented
"Strangely, the problem was solved after re-downloading the corresponding modules."
Hey, could you give more detail on re-downloading the corresponding modules? Which modules are they? Thanks
Nanyu commented
Sorry, I'm unable to help.
That was a file I worked on at my previous company, and it was a long time ago.