TexasInstruments / edgeai-tidl-tools

Edgeai TIDL Tools and Examples - This repository contains tools and examples developed for the Deep Learning Runtime (DLRT) offering provided by TI's edge AI solutions.


Questions about Python examples running on a target board

ywpkwon opened this issue

Hi. Thanks for this great library!

I'm following the Python examples: https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/examples/osrt_python/README.md
I was able to compile models on a PC and copied ./models and ./model-artifacts to my TDA4x (SDK 8.2).
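
For context, the PC-side compile step is the same example script run in compile mode; a sketch per the osrt_python README (the -c flag generates the artifacts for the default model list):

    cd examples/osrt_python/ort
    python3 onnxrt_ep.py -c   # writes the *_tidl_net.bin / *_tidl_io_*.bin artifacts into ./model-artifacts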

My question is: when running onnxrt_ep.py on the TDA4x target board,
the Python script's input still seems to be the .onnx file, not the .bin files, which I understood to be the product of compilation.

For example, when I check the code below in onnxrt_ep.py, config['model_path'] is ../../../models/public/resnet18_opset9.onnx.

    # the execution part (rt is onnxruntime; delegate_options and so
    # are built earlier in onnxrt_ep.py)
    EP_list = ['TIDLExecutionProvider', 'CPUExecutionProvider']
    sess = rt.InferenceSession(config['model_path'],
                               providers=EP_list,
                               provider_options=[delegate_options, {}],
                               sess_options=so)
  • Shouldn't inference on the target board load from the .bin files in model-artifacts (e.g. 191_tidl_io_1.bin, 191_tidl_net.bin)?
  • Why did I have to compile on the PC if the target board can run models by loading the .onnx files directly?
  • Is there an example Python script that creates and runs inference on a model from the .bin files?

Can you please advise? What am I missing?


FYI, I'm setting the environment like this on the board:

    export DEVICE=j7
    export TIDL_TOOLS_PATH=/home/root/edgeai-tidl-tools/tidl_tools

The delegate options include the model artifacts folder name; this shall be set to the ./model-artifacts directory copied to the TDA4x target file system, which has all the .bin files for target execution. The .onnx file is still what is passed to InferenceSession, but the TIDLExecutionProvider loads the compiled .bin artifacts from that folder when running on target.
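
For illustration, a minimal sketch of target-side inference, assuming the 'artifacts_folder' delegate option key used by the osrt_python examples (the paths and the model directory name below are placeholders; see examples/osrt_python/common_utils.py for the exact option set on your SDK version):

    import onnxruntime as rt

    # The .onnx file is still what InferenceSession loads; the TIDL execution
    # provider reads the compiled *_tidl_net.bin / *_tidl_io_*.bin files from
    # 'artifacts_folder'.
    delegate_options = {
        'artifacts_folder': './model-artifacts/<model-dir>',  # folder copied from the PC
    }

    so = rt.SessionOptions()
    sess = rt.InferenceSession(
        './models/public/resnet18_opset9.onnx',  # same .onnx used at compile time
        providers=['TIDLExecutionProvider', 'CPUExecutionProvider'],
        provider_options=[delegate_options, {}],
        sess_options=so,
    )
    # Subgraphs supported by TIDL run from the compiled .bin artifacts on the
    # C7x/MMA accelerator; unsupported layers fall back to CPUExecutionProvider.

This is why compilation on the PC is still required: the session only offloads to the accelerator when the pre-compiled artifacts are present in that folder.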