dkurt / openvino_pytorch_layers

How to export PyTorch models with unsupported layers to ONNX and then to Intel OpenVINO

Home Page: https://github.com/openvinotoolkit/openvino


⚠️ The source code will continue to be supported and developed in OpenVINO contrib. Thanks to everyone who used this repository.


A repository with guides for enabling PyTorch layers that Intel OpenVINO does not support out of the box.


OpenVINO Model Optimizer extension

To create an OpenVINO IR, pass the extra --extension flag to Model Optimizer with the path to the extensions that perform the graph transformations and register the custom layers:

mo --input_model model.onnx --extension openvino_pytorch_layers/mo_extensions

Custom CPU extensions

You also need to build the CPU extensions library, which contains the actual C++ layer implementations:

source /opt/intel/openvino_2022/setupvars.sh

cd user_ie_extensions
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release && make -j$(nproc --all)

Add the compiled extensions library to your project:

from openvino.runtime import Core

core = Core()
core.add_extension('user_ie_extensions/build/libuser_cpu_extension.so')

model = core.read_model('model.xml')
compiled_model = core.compile_model(model, 'CPU')
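With the extension loaded, running inference is a single call on the compiled model. A minimal sketch assuming the paths above and a model with one float input of shape 1x3x224x224 (the shape is illustrative; use your model's real input shape):

```python
import numpy as np
from openvino.runtime import Core

core = Core()
# Extension library built in the previous step.
core.add_extension('user_ie_extensions/build/libuser_cpu_extension.so')

model = core.read_model('model.xml')
compiled_model = core.compile_model(model, 'CPU')

# Illustrative random input; replace with real preprocessed data.
input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Run inference and fetch the first output by its port.
result = compiled_model([input_tensor])[compiled_model.output(0)]
```

Calling the CompiledModel object directly creates an inference request under the hood; for repeated inference you can also create an explicit request with compiled_model.create_infer_request().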

About

License: Apache License 2.0
