PINTO0309 / PINTO_model_zoo

A repository for storing models that have been inter-converted between various frameworks. Supported frameworks are TensorFlow, PyTorch, ONNX, OpenVINO, TFJS, TFTRT, TensorFlowLite (Float32/16/INT8), EdgeTPU, CoreML.

Home Page: https://qiita.com/PINTO


Convert Fast-ACVNet plus model to ONNX

ForestWang opened this issue · comments

Issue Type

Support

OS

Ubuntu

OS architecture

x86_64

Programming Language

C++, Python

Framework

ONNX

Model name and Weights/Checkpoints URL

https://drive.google.com/drive/folders/1lcyzoKlkYoDL3tiPGCR6nob9WsusaTI8

The project is https://github.com/gangweiX/Fast-ACVNet

Description

```python
import torch
import numpy as np
import onnx
from models import models


def main():
    attention_weights_only = False
    model = models['Fast_ACVNet_plus'](192, attention_weights_only)

    # Load parameters, keeping only keys present in the current architecture
    model_path = './weights/generalization.ckpt'
    state_dict = torch.load(model_path, map_location=torch.device('cpu'))
    model_dict = model.state_dict()
    pre_dict = {k: v for k, v in state_dict['model'].items() if k in model_dict}
    model_dict.update(pre_dict)
    model.load_state_dict(model_dict)
    model.eval()

    # Export to ONNX
    in_h, in_w = (480, 640)
    t1 = torch.randn(1, 3, in_h, in_w)
    t2 = torch.randn(1, 3, in_h, in_w)
    output = model(t1, t2)
    print(output[0].shape)
    torch.onnx.export(
        model,
        (t1, t2),
        "fast_acvplus.onnx",                        # where to save the model
        export_params=True,                         # store the trained weights inside the model file
        opset_version=16,                           # the ONNX opset version to export to
        do_constant_folding=True,                   # execute constant folding for optimization
        input_names=['left_image', 'right_image'],  # the model's input names
        output_names=['output'],                    # the model's output names
    )

    # Load the exported model and check that it is well formed.
    # Note: check_model() returns None and raises an exception on failure,
    # so reaching the print below means the check passed.
    onnx_model = onnx.load("fast_acvplus.onnx")
    onnx.checker.check_model(onnx_model)
    print('check: passed')


if __name__ == '__main__':
    main()
```
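The script imports `onnxruntime` but never uses it. Before involving TensorRT, a parity check between the PyTorch output and the exported graph can rule out an export problem. A minimal sketch, meant to run right after the export above so that `model`, `t1`, `t2`, and `output` are still in scope:

```python
# Parity check: run the exported graph with ONNX Runtime and compare it
# against the PyTorch output computed above. Assumes t1, t2, output exist.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("fast_acvplus.onnx",
                            providers=["CPUExecutionProvider"])
ort_out = sess.run(None, {"left_image": t1.numpy(),
                          "right_image": t2.numpy()})[0]

np.testing.assert_allclose(output[0].detach().numpy(), ort_out,
                           rtol=1e-3, atol=1e-4)
print("PyTorch and ONNX Runtime outputs match within tolerance")
```

If this check passes, the ONNX file itself is faithful to the PyTorch model and the discrepancy lies in the TensorRT conversion step.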

I used this script to convert the PyTorch model to ONNX, and the export succeeded, but when I run inference with TensorRT the results are wrong. With your ONNX model the results are correct.

Could you share the script you used to convert the Fast-ACVNet plus model to ONNX? Thank you very much!
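For what it's worth, one common difference between a plain `torch.onnx.export` and published conversions is a post-export simplification pass, which often removes graph patterns that trip up TensorRT. Whether that is the difference here is only a guess; a minimal sketch with onnx-simplifier (`pip install onnxsim`):

```python
# A possible post-export step, NOT confirmed as the maintainer's pipeline:
# simplify the exported graph with onnx-simplifier before feeding TensorRT.
import onnx
from onnxsim import simplify

model = onnx.load("fast_acvplus.onnx")
model_simplified, ok = simplify(model)
assert ok, "onnx-simplifier could not validate the simplified graph"
onnx.save(model_simplified, "fast_acvplus_simplified.onnx")
```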

Relevant Log Output

[09/12/2023-16:06:26] [W] [TRT] TensorRT encountered issues when converting weights between types and that could affect accuracy.
[09/12/2023-16:06:26] [W] [TRT] If this is not the desired behavior, please modify the weights or retrain with regularization to adjust the magnitude of the weights.
[09/12/2023-16:06:26] [W] [TRT] Check verbose logs for the list of affected weights.
[09/12/2023-16:06:26] [W] [TRT] - 155 weights are affected by this issue: Detected subnormal FP16 values.
[09/12/2023-16:06:26] [W] [TRT] - 75 weights are affected by this issue: Detected values less than smallest positive FP16 subnormal value and converted them to the FP16 minimum subnormalized value.
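For context, these warnings mean some exported weights are too small in magnitude for FP16: the smallest positive normal FP16 value is 2^-14 ≈ 6.1e-5 and the smallest positive subnormal is 2^-24 ≈ 5.96e-8, so tiny weights get rounded or flushed during the FP16 conversion. A small NumPy illustration:

```python
import numpy as np

# Smallest positive normal FP16 value:    2**-14 ~= 6.10e-05
# Smallest positive subnormal FP16 value: 2**-24 ~= 5.96e-08
w = np.array([1e-4, 3e-5, 3e-8, 1e-9], dtype=np.float32)
print(w.astype(np.float16))
# 1e-4 survives as a normal value, 3e-5 becomes subnormal,
# 3e-8 rounds up to the minimum subnormal, and 1e-9 flushes to zero.
```

Building the TensorRT engine without FP16 enabled first is a quick way to check whether the wrong results come from this precision loss or from the graph itself.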

URL or source code for simple inference testing code

https://github.com/pcb9382/StereoAlgorithms/tree/main/FastACVNet_plus

+1

I have no record of work done more than 10 months ago. Carefully compare my ONNX with your ONNX.
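One way to start that comparison (a sketch, not an official procedure; the second file name is a placeholder for the downloaded model) is to diff the operator histograms of the two graphs:

```python
# Sketch: structural diff of two ONNX graphs by operator counts.
# "pinto_fast_acvnet_plus.onnx" is a placeholder file name.
from collections import Counter
import onnx

def op_histogram(path):
    model = onnx.load(path)
    return Counter(node.op_type for node in model.graph.node)

mine = op_histogram("fast_acvplus.onnx")
theirs = op_histogram("pinto_fast_acvnet_plus.onnx")
for op in sorted(set(mine) | set(theirs)):
    if mine[op] != theirs[op]:
        print(f"{op}: mine={mine[op]} theirs={theirs[op]}")
```

Operators that appear in one graph but not the other (or in very different counts) point at where the two export pipelines diverge; a viewer such as Netron can then be used to inspect those regions.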