PINTO0309 / PINTO_model_zoo

A repository for storing models that have been inter-converted between various frameworks. Supported frameworks are TensorFlow, PyTorch, ONNX, OpenVINO, TFJS, TFTRT, TensorFlowLite (Float32/16/INT8), EdgeTPU, CoreML.

Home Page: https://qiita.com/PINTO


Script to convert RAFT models

phuelsdunk opened this issue

Issue Type

Documentation Feature Request

OS

Other

OS architecture

Other

Programming Language

Other

Framework

PyTorch

Model name and Weights/Checkpoints URL

252_RAFT

Description

Hello again :)

I have been trying to reproduce your ONNX files myself; however, I get different results:

When I export my model with torch.onnx.export, I have to use opset version 16, which adds GridSample operations to the ONNX graph. Your ONNX files, however, contain no such operations; they appear to have been replaced by GatherElements.

So my question is: how can I create models similar to yours?
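For reference, a minimal sketch of the export path described above, assuming the torchvision RAFT implementation (an assumption; the zoo's 252_RAFT may be built from different weights). The wrapper keeps only the final flow estimate, since RAFT returns one prediction per refinement iteration:

```python
import torch
from torchvision.models.optical_flow import raft_small, Raft_Small_Weights

class RAFTWrapper(torch.nn.Module):
    """Wraps RAFT so the exported graph has a single output:
    RAFT returns one flow estimate per refinement iteration,
    and we keep only the last (most refined) one."""
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, frame1, frame2):
        return self.model(frame1, frame2)[-1]

model = RAFTWrapper(raft_small(weights=Raft_Small_Weights.DEFAULT)).eval()

# Dummy inputs at a fixed resolution (RAFT requires H and W divisible by 8).
frame1 = torch.randn(1, 3, 240, 320)
frame2 = torch.randn(1, 3, 240, 320)

# opset 16 is the first opset that defines GridSample, which is what
# RAFT's F.grid_sample-based correlation lookup is exported to.
torch.onnx.export(
    model,
    (frame1, frame2),
    "raft_small_240x320_opset16.onnx",
    opset_version=16,
    input_names=["frame1", "frame2"],
    output_names=["flow"],
)
```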

Relevant Log Output

No response

URL or source code for simple inference testing code

No response

I can't understand your intent, because there is no explanation at all of what would be wrong with GridSample, which is now available with opset>=16.

https://zenn.dev/pinto0309/scraps/2766a953754dea

https://zenn.dev/pinto0309/scraps/7d4032067d0160
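For context, the essence of a GridSample-to-GatherElements replacement is to rewrite bilinear sampling as explicit index gathers, since torch.gather exports to GatherElements at any opset. A minimal sketch of that idea (not the zoo's actual conversion script; align_corners=True semantics and border clamping are assumed):

```python
import torch

def grid_sample_via_gather(img, grid):
    """Bilinear sampling expressed with gather ops only, so ONNX export
    produces GatherElements instead of GridSample (works for opset < 16).
    img: (N, C, H, W); grid: (N, Hg, Wg, 2) with coords in [-1, 1]."""
    N, C, H, W = img.shape
    Hg, Wg = grid.shape[1:3]
    # Map normalized coords to pixel coords (align_corners=True).
    x = (grid[..., 0] + 1) * (W - 1) / 2
    y = (grid[..., 1] + 1) * (H - 1) / 2
    x0, y0 = x.floor(), y.floor()
    wx, wy = x - x0, y - y0                      # bilinear weights
    # Clamp the four neighbor coordinates to the image border.
    x0c = x0.clamp(0, W - 1).long(); x1c = (x0 + 1).clamp(0, W - 1).long()
    y0c = y0.clamp(0, H - 1).long(); y1c = (y0 + 1).clamp(0, H - 1).long()
    flat = img.reshape(N, C, H * W)

    def gather(yi, xi):
        # Flat index per sample point; exports as GatherElements.
        idx = (yi * W + xi).reshape(N, 1, -1).expand(-1, C, -1)
        return torch.gather(flat, 2, idx)

    out = (gather(y0c, x0c) * ((1 - wy) * (1 - wx)).reshape(N, 1, -1)
         + gather(y0c, x1c) * ((1 - wy) * wx).reshape(N, 1, -1)
         + gather(y1c, x0c) * (wy * (1 - wx)).reshape(N, 1, -1)
         + gather(y1c, x1c) * (wy * wx).reshape(N, 1, -1))
    return out.reshape(N, C, Hg, Wg)
```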

Maybe I can rephrase the question this way: I have observed that your models are much faster than my own converted ones. Did you do any model optimization?

All of my generated models committed to this zoo have been specially optimized, across all five years of the project. For RAFT in particular, special optimization work was necessary so that it could run on runtime environments that are now more than two years old. Essentially, I believe that as of 2024, the models would run at high speed without such special optimization work.

However, I am not really interested in optimizing an architecture that is several years old, since RAFT's architecture itself is inherently heavy in the operations it requires.
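As a general point of reference (not the special optimization described above), a common post-export pass is onnx-simplifier, which folds constants and simplifies the shape/Gather chains left behind by the exporter and often speeds up exported graphs on its own. The file name below is hypothetical, carried over from the export sketch earlier in this thread:

```python
import onnx
from onnxsim import simplify

# Hypothetical file name from the earlier export sketch.
model = onnx.load("raft_small_240x320_opset16.onnx")

# Fold constants and simplify the graph; `ok` confirms the simplified
# model still produces the same outputs on random inputs.
model_simp, ok = simplify(model)
assert ok, "simplified model failed the output-equivalence check"

onnx.save(model_simp, "raft_small_240x320_opset16_simplified.onnx")
```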