This is boilerplate to retrain YOLO v3 with GluonCV (based on Apache MXNet) on Amazon SageMaker Notebook Instances, then convert the result for Apple iOS CoreML. It produces a newly trained model that can be used on iOS with CoreML. See the accompanying Medium post on object detection with GluonCV.
http://mxnet.incubator.apache.org/api/python/contrib/onnx.html
- We fine-tune an existing pre-trained model on our new categories, using transfer learning.
- First, we get YOLO weights trained on the COCO dataset, then convert them to ONNX to import into GluonCV.
- Alternatively, we can start from the pre-trained yolo3_darknet53_coco model in the GluonCV Model Zoo, which runs on 608x608 images with a Box AP of 37.0/58.2/40.1.
- Then we retrain with MXNet on an AWS GPU instance.
- Finally, we export and convert to the CoreML format, ready for use.
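The Box AP figures above are COCO-style AP / AP50 / AP75, which all rest on the intersection-over-union (IoU) between a predicted box and a ground-truth box. A minimal illustrative sketch of IoU (not GluonCV's actual metric code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (xmin, ymin, xmax, ymax)."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection counts as correct for AP50 when IoU >= 0.5,
# and for the stricter AP75 when IoU >= 0.75.
print(iou((0, 0, 100, 100), (50, 0, 150, 100)))  # 0.333...
```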
The model can now be used on device, for instance with [this project](https://github.com/tucan9389/ObjectDetection-CoreML).
- takes your data from S3
- augments your data using imgaug and GluonCV transformations
- retrains YOLO v3 from a pretrained model (transfer learning)
- deploys a SageMaker endpoint for images or a video stream, so you can test it with this webapp [13]
- shows nice metrics to evaluate your model
- saves your GluonCV model artifacts to S3
- exports MXNET model artifacts to ONNX
- converts ONNX to CoreML
- enjoy :)
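The augmentation step above can be illustrated with a horizontal flip that also remaps the bounding boxes. This is a hedged sketch of the idea only, not the notebook's actual imgaug/GluonCV pipeline (which also flips the pixel array itself, e.g. via gluoncv.data.transforms):

```python
def hflip_boxes(boxes, img_width):
    """Mirror (xmin, ymin, xmax, ymax) boxes for a horizontally flipped image.

    Illustrative sketch: when an image of width img_width is mirrored,
    each box's x-coordinates must be reflected and swapped so that
    xmin stays smaller than xmax.
    """
    flipped = []
    for xmin, ymin, xmax, ymax in boxes:
        flipped.append((img_width - xmax, ymin, img_width - xmin, ymax))
    return flipped

# One box on a 608x608 training image (the YOLO input size used above).
print(hflip_boxes([(10, 20, 50, 80)], img_width=608))  # [(558, 20, 598, 80)]
```

Flipping twice is the identity, which is a quick sanity check for any box-remapping transform.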
The best way to enjoy the result is to deploy the model in an app.