ngaloppo / openvino_pytorch_layers

How to export a PyTorch network with MaxUnpool to ONNX and then to Intel OpenVINO

Repository with guides for enabling custom PyTorch layers in Intel OpenVINO:

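Export to ONNX

The first step is exporting the PyTorch network to ONNX. Below is a minimal sketch with a hypothetical model that pairs nn.MaxPool2d(return_indices=True) with nn.MaxUnpool2d; depending on your PyTorch version, exporting MaxUnpool2d may require the export helpers from this repository rather than plain torch.onnx.export.

import torch
import torch.nn as nn

# Hypothetical example model: MaxPool2d produces pooling indices that
# MaxUnpool2d consumes to restore the original spatial resolution.
class UnpoolModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)
        self.unpool = nn.MaxUnpool2d(2, stride=2)

    def forward(self, x):
        y, indices = self.pool(x)
        return self.unpool(y, indices)

model = UnpoolModel()
model.eval()

# Export to ONNX; the resulting model.onnx is the input for Model Optimizer below.
dummy = torch.randn(1, 3, 32, 32)
torch.onnx.export(model, dummy, 'model.onnx', opset_version=11)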

OpenVINO Model Optimizer extension

To create an OpenVINO IR, pass the --extension flag to Model Optimizer with a path to the extensions that perform graph transformations and register the custom layers.

python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_onnx.py \
    --input_model model.onnx \
    --extension openvino_pytorch_layers/mo_extensions

Custom CPU extensions

You also need to build the CPU extensions library, which contains the C++ implementations of the layers:

source /opt/intel/openvino/bin/setupvars.sh
export TBB_DIR=/opt/intel/openvino/deployment_tools/inference_engine/external/tbb/cmake/

cd user_ie_extensions
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release && make -j$(nproc --all)

Load the compiled extensions library in your project:

from openvino.inference_engine import IECore

ie = IECore()
ie.add_extension('user_ie_extensions/build/libuser_cpu_extension.so', 'CPU')

net = ie.read_network('model.xml', 'model.bin')
exec_net = ie.load_network(net, 'CPU')
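
After loading the network you can run inference. A minimal sketch, assuming a single-input model; the input shape here is a placeholder that must match your network:

import numpy as np

# Pick the network's first input; replace the shape with your model's actual one.
input_name = next(iter(net.input_info))
data = np.random.randn(1, 3, 32, 32).astype(np.float32)

results = exec_net.infer({input_name: data})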
