Fast object detector with a distributed neural network.
Prerequisites:

- Unity
- Barracuda (> 2.0.0)
- Python
- Pillow
- ONNX Runtime (w/ GPU)
Install the Python dependencies:

$ pip install -r requirements.txt
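The contents of requirements.txt are not shown here; given the prerequisites above, it presumably amounts to something like:

```
Pillow
onnxruntime-gpu
```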
To test in the Unity Editor:

- Launch Unity Hub and open the project by selecting the "fastdet-test" folder.
- "File" → "Open Scene" and select "SampleScene.unity".
- Open "Project" → "Assets" tab and make sure the "Yolov3-tiny" model is visible.
- Select "SampleScene" → "Canvas" and make sure the Yolo Model is associated with yolov3-tiny. (if missing, click it and connect to the yolov3-tiny.onnx)
- Connect the PC to a camera, press the Play button at the top.
- "File" → "Build Settings" and select "Android". Press "Switch Platform".
- Enable the "Developer Mode" and "USB Debugging" on an Android phone.
- Press "Build & Run".
Test the detector from the command line:

$ python server/detector.py -c 80 models/yolov3-full.onnx testdata/dog.jpg
$ python server/detector.py -c 9 models/yolov3-rsu.onnx testdata/rsu1.jpg
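For a rough idea of what a one-shot detection involves, here is a minimal onnxruntime sketch. It is not the code in server/detector.py; the 416×416 input size, NCHW layout, and [0, 1] scaling are assumptions about a typical YOLOv3 ONNX export:

```python
import numpy as np
import onnxruntime as ort
from PIL import Image

# Hypothetical sketch, not the actual server/detector.py.
# Input size, layout, and scaling are assumptions about a
# typical YOLOv3 export.
session = ort.InferenceSession("models/yolov3-full.onnx",
                               providers=["CPUExecutionProvider"])

img = Image.open("testdata/dog.jpg").convert("RGB").resize((416, 416))
x = np.asarray(img, dtype=np.float32) / 255.0   # HWC in [0, 1]
x = x.transpose(2, 0, 1)[np.newaxis]            # -> NCHW, batch of 1

input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: x})
print([o.shape for o in outputs])               # raw YOLO output tensors
```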
Start the detection server and query it with the test client:

$ python server/server.py -s 10000
$ python server/client.py rtsp://localhost:10000/detect testdata/dog.jpg
The server can also host multiple models at once; each positional argument takes the form name:classes:path, and each model is exposed under its name:

$ python server/server.py -s 10000 full:80:models/yolov3-full.onnx rsu:9:models/yolov3-rsu.onnx
$ python server/client.py rtsp://localhost:10000/full testdata/dog.jpg
$ python server/client.py rtsp://localhost:10000/rsu testdata/rsu1.jpg
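A minimal parser for that spec format, as inferred from the examples above (the real server.py may handle it differently):

```python
def parse_model_spec(spec: str) -> tuple[str, int, str]:
    """Split a 'name:classes:path' model spec.

    Inferred from the command-line examples above; hypothetical,
    not the actual parsing code in server/server.py.
    """
    name, classes, path = spec.split(":", 2)
    return name, int(classes), path

print(parse_model_spec("full:80:models/yolov3-full.onnx"))
# -> ('full', 80, 'models/yolov3-full.onnx')
```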
To run inference on the GPU, pass -m cuda:

$ python server/server.py -s 10000 -m cuda full:80:models/yolov3-full.onnx
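Presumably the -m flag selects the ONNX Runtime execution provider; in onnxruntime terms, choosing CUDA over CPU looks roughly like this (the flag-to-provider mapping is an assumption):

```python
import onnxruntime as ort

# Assumed mapping of the server's -m flag onto ONNX Runtime providers.
# CUDAExecutionProvider requires the onnxruntime-gpu package; providers
# later in the list act as fallbacks if an earlier one cannot load.
PROVIDERS = {
    "cuda": ["CUDAExecutionProvider", "CPUExecutionProvider"],
    "cpu": ["CPUExecutionProvider"],
}
session = ort.InferenceSession("models/yolov3-full.onnx",
                               providers=PROVIDERS["cuda"])
```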
To view the app's logs on the phone, use adb (bundled with Unity's Android SDK):

> cd \Program Files\Unity\Hub\Editor\*\Editor\Data\PlaybackEngines\AndroidPlayer\SDK\platform-tools
> adb logcat -c
> adb logcat -s Unity
To run detection from the app:

- Launch the server.
- Open SampleScene.unity.
- Configure the "Server Url" with the appropriate host/port.
- Play the scene.
TODO:

- IPv6 support (both client and server).
- Dockerize the server.
- Rewrite the server in a faster language (Go or C#, maybe?).