Qengineering / YoloCam

Raspberry Pi stand-alone AI-powered camera with live feed, email notification and event-triggered cloud storage

Home Page: https://qengineering.eu/yolocam.html


Custom tflite model

prpankajsingh opened this issue · comments

Hi,
I am interested in your solution. I have a few questions:

  1. Can we integrate a custom TF-Lite object detection model (for example, one trained with Google's AutoML)?
  2. If yes, how many FPS can we expect on a Raspberry Pi 4B when our custom-trained TF-Lite model (from AutoML) is integrated with your GPIO-based software?
  3. Currently, according to your documentation, the trigger mechanism checks several criteria before firing, such as a minimum probability of 50%, occupied area, motion, etc. Are these criteria customizable?
  4. Currently the trigger mechanism decides whether to fire based on rules applied to each individual image. Is it possible to add an extra criterion that fires the trigger only if the object is detected in more than a certain fraction (say 50%) of the last 5 (or 10, or 20) frames? This would make the trigger more robust and reduce false positives.

@prpankajsingh,

  1. I'm using the ncnn framework running a YOLO derivative, wrapped in a library. You can certainly use a tailor-made TF-Lite model.
  2. Using floating point, most networks run at 2-5 FPS on a Raspberry Pi 4. Integer (quantized) models are much faster; up to 20 FPS is possible.
  3. All criteria are defined in a settings file. You can change them whenever you like.
  4. That's possible.

Do keep in mind that you have to program the above modifications yourself. All C++ code is provided on the image.
The integration of TF-Lite in particular can be cumbersome.