Exporting detectron2 models to onnx and running inference on them is surprisingly hard.
This repository contains my personal learnings with detectron2 and onnx inference.
Export a [detectron2](https://github.com/facebookresearch/detectron2) model to [onnx](https://github.com/onnx/onnx) and run inference using the [caffe2 onnx backend](https://pytorch.org/tutorials/advanced/super_resolution_with_caffe2.html). This lets you run inference on a Raspberry Pi with acceptable inference times.