python deployment/pytorch2onnx.py \
configs/mask_rcnn_r50_fpn_mstrain-poly_3x_coco.py \
checkpoints/latest.pth \
--output-file result.onnx \
--input-img deployment/color.jpg \
--dynamic-export \
--cfg-options \
model.test_cfg.deploy_nms_pre=-1
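The `--cfg-options` flag overrides nested config keys with dotted `key=value` pairs (here, `model.test_cfg.deploy_nms_pre=-1`). As a rough illustration of how such overrides map onto a nested config, here is a minimal sketch; `apply_overrides` is a hypothetical helper, simplified from what MMDetection actually does internally:

```python
def apply_overrides(cfg, overrides):
    """Apply dotted key=value overrides (as passed via --cfg-options)
    to a nested dict. Simplified illustration, not MMDetection's code."""
    for item in overrides:
        key, _, raw = item.partition("=")
        # Interpret the value: int, float, bool, or fall back to string.
        try:
            value = int(raw)
        except ValueError:
            try:
                value = float(raw)
            except ValueError:
                value = {"true": True, "false": False}.get(raw.lower(), raw)
        node = cfg
        parts = key.split(".")
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return cfg

cfg = {"model": {"test_cfg": {}}}
apply_overrides(cfg, ["model.test_cfg.deploy_nms_pre=-1"])
# cfg["model"]["test_cfg"]["deploy_nms_pre"] is now -1
```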
Run:
# Run the test
python onnx_inference.py {your pic} --model {your onnx model}
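Inside `onnx_inference.py`, the image has to be preprocessed into the tensor layout the exported ONNX model expects before the forward pass. A hedged sketch of the usual detector preprocessing follows; the ImageNet mean/std values are an assumption here and should be checked against your training config:

```python
import numpy as np

def preprocess(img):
    """Convert an HWC uint8 image into the NCHW float32 tensor that
    detection ONNX models typically expect. The mean/std below are
    the common ImageNet values -- an assumption, verify against the
    normalization in your own config."""
    mean = np.array([123.675, 116.28, 103.53], dtype=np.float32)
    std = np.array([58.395, 57.12, 57.375], dtype=np.float32)
    img = img.astype(np.float32)
    img = (img - mean) / std        # per-channel normalization
    img = img.transpose(2, 0, 1)    # HWC -> CHW
    return img[None, ...]           # add batch dim -> NCHW

# Example on a dummy image:
dummy = np.zeros((800, 1216, 3), dtype=np.uint8)
tensor = preprocess(dummy)
print(tensor.shape)  # (1, 3, 800, 1216)
```

The resulting array can then be fed to an `onnxruntime.InferenceSession` as the model's input tensor.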
Run:
# Go back to the project root
cd ..
# Enter the inference_C++ folder
cd inference_C++
# Create the build directory and compile
mkdir build
cd build
cmake ..
make
# Run the generated executable runDet
./runDet {your pic} --model {your onnx model}
- Note that the prebuilt onnxruntime releases cannot be used on a Raspberry Pi with the armv7l architecture; you need to rebuild onnxruntime from source instead.
- Build onnxruntime from source as preparation.
- Download the onnxruntime release and replace the .so files in its /lib folder with the .so files you just built in the preparation step.
- Run:
# Go back to the project root
cd ..
# Enter the inference_C++ folder
cd inference_C++
# Create the build directory and compile
mkdir build
cd build
cmake ..
make
# Run the generated executable runDet
./runDet {your pic} --model {your onnx model}
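To confirm that the replaced .so files really target armv7l before linking against them, you can inspect the ELF header's machine field (the same information `readelf -h` prints). A small sketch using only the Python standard library; the path in the final comment is hypothetical:

```python
import struct

# ELF e_machine values (subset) from the ELF specification.
EM_ARM, EM_X86_64, EM_AARCH64 = 0x28, 0x3E, 0xB7

def elf_machine(path):
    """Return the e_machine value of an ELF file, or None if not ELF.
    Assumes a little-endian ELF, which holds for armv7l and x86-64."""
    with open(path, "rb") as f:
        header = f.read(20)
    if header[:4] != b"\x7fELF":
        return None
    # e_machine is a little-endian uint16 at byte offset 18.
    return struct.unpack_from("<H", header, 18)[0]

# Hypothetical usage:
# elf_machine("lib/libonnxruntime.so") == EM_ARM  -> built for 32-bit ARM
```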