OpenRemote Object Tracking

Open source implementation of an object tracking algorithm that takes a video source as input, calculates parameters for the objects in the frame (people, bikes, cars), displays those parameters (number of objects, average speed, direction of movement), and sends them to the OpenRemote manager through an HTTP API in JSON format.
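As a rough sketch of what such a request could look like, the snippet below posts a hypothetical parameter payload to the manager. The endpoint URL and field names are illustrative assumptions, not the project's actual API contract.

```python
# Illustrative sketch only: the manager URL and the payload shape below
# are assumptions, not the project's actual API contract.
import requests

MANAGER_URL = "http://localhost:8080/api/feed-stats"  # hypothetical endpoint

payload = {
    "feed_id": 1,
    "object_count": 12,         # number of detected objects in the frame
    "average_speed": 4.2,       # hypothetical unit: metres per second
    "direction": "north-east",  # dominant direction of movement
}

response = requests.post(MANAGER_URL, json=payload, timeout=5)
response.raise_for_status()
```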

Demo of the application:

https://www.youtube.com/watch?v=1NQoLWasbcI

Installation

Run

Running the application is as simple as:

  1. sudo docker-compose build
  2. sudo docker-compose up

Or you can look for more detailed information about the containers in their respective subfolders. After that, you can open the application at localhost:3000.

The containers

Architecture

Below is a simple high-level overview of all the components that make up the object detection application; each component gets a short summary explaining the function behind the container.

[Architecture overview diagram]

Front-end

The front-end is a small React application meant as a friendly GUI for creating and configuring video feeds. Besides this, the front-end also has a built-in editor for drawing detection lines; these lines are stored in the configuration and passed on to the object detection system. Of course, you are also able to view the analyzed frames of feeds on the front-end. The front-end is hosted on port 3000; the IP address depends on which method you use to run Docker (it could be localhost, or e.g. 192.168.99.100).
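As an illustration of what a stored feed configuration with a detection line might look like, here is a minimal sketch; the field names and coordinate format are assumptions, not the actual schema.

```python
# Hypothetical shape of a feed configuration with one detection line.
# Field names and coordinate conventions are illustrative assumptions.
feed_config = {
    "name": "Town square cam",
    "feed_url": "rtsp://192.168.1.50:554/stream",
    "detection_lines": [
        {
            # line endpoints in pixel coordinates of the video frame
            "start": {"x": 120, "y": 340},
            "end": {"x": 860, "y": 340},
            "label": "main road crossing",
        }
    ],
}
```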

Back-end

The back-end is responsible for storing all the data about video feeds (name, feed_url, etc.) and for communicating with the object detection on behalf of the front-end. It is a simple Flask REST API. Communication between the object detection and the back-end is done using RabbitMQ, as can be seen in the figure above; for this communication we use kombu. The back-end runs on port 5000; as noted above, the IP address depends on how you run Docker.
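To make the flow concrete, here is a minimal sketch of a Flask route that publishes a start signal over RabbitMQ using kombu. The route path, exchange, and queue names are assumptions for illustration; the real wiring may differ.

```python
from flask import Flask, jsonify
from kombu import Connection, Exchange, Queue

app = Flask(__name__)

# Hypothetical exchange/queue names; the real routing keys may differ.
exchange = Exchange("feeds", type="direct")
control_queue = Queue("feed_control", exchange, routing_key="control")

@app.route("/feeds/<int:feed_id>/start", methods=["POST"])
def start_feed(feed_id):
    # Publish a start signal for the object detection worker to pick up.
    with Connection("amqp://guest:guest@rabbitmq:5672//") as conn:
        producer = conn.Producer(serializer="json")
        producer.publish(
            {"action": "start", "feed_id": feed_id},
            exchange=exchange,
            routing_key="control",
            declare=[control_queue],
        )
    return jsonify({"status": "start signal sent", "feed_id": feed_id})
```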

Object-Detection

The object detection container is responsible for, you guessed it, the object detection. The current iteration available in develop is a small worker which listens for start/stop signals and starts analyzing a feed when a signal is received. Currently only one feed can be analyzed at a time.
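A minimal sketch of such a worker, using kombu's ConsumerMixin, could look as follows. The queue wiring and message shape are assumptions here and must match whatever the back-end actually publishes.

```python
from kombu import Connection, Exchange, Queue
from kombu.mixins import ConsumerMixin

# Hypothetical queue wiring; must mirror the back-end's publisher side.
exchange = Exchange("feeds", type="direct")
control_queue = Queue("feed_control", exchange, routing_key="control")

class DetectionWorker(ConsumerMixin):
    def __init__(self, connection):
        self.connection = connection

    def get_consumers(self, Consumer, channel):
        return [Consumer(queues=[control_queue], callbacks=[self.on_message])]

    def on_message(self, body, message):
        # body is assumed to look like {"action": "start", "feed_id": 1}
        if body.get("action") == "start":
            print("starting analysis of feed", body["feed_id"])
            # ... open the feed and run detection here ...
        elif body.get("action") == "stop":
            print("stopping analysis of feed", body["feed_id"])
        message.ack()

if __name__ == "__main__":
    with Connection("amqp://guest:guest@rabbitmq:5672//") as conn:
        DetectionWorker(conn).run()
```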

The video feed is pulled from the URL using a Python wrapper around libVLC, which has support for most common input types. Currently we support two types of feeds (see the sketch after this list):

  • YouTube live feeds
  • IP cams
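As a minimal illustration of opening a feed with python-vlc (a common Python wrapper around libVLC), consider the sketch below; the actual feed-handling code in the container may differ.

```python
# Minimal sketch of opening a stream with python-vlc; the URL is an
# example IP cam address, and the real wrapper code may differ.
import time
import vlc

url = "rtsp://192.168.1.50:554/stream"
player = vlc.MediaPlayer(url)
player.play()

time.sleep(5)  # let the stream buffer and play for a few seconds
print("playing:", player.is_playing() == 1)
player.stop()
```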

We have worked on a proper asynchronous thread manager which can spawn multiple video feed analysis threads to process several feeds at once. However, this requires a change to the way RabbitMQ messaging is handled, so it is not implemented in the current iteration; it would be a welcome improvement.
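One possible shape for such a manager is sketched below; this is illustrative only, not the current implementation.

```python
# Illustrative sketch of a multi-feed thread manager; not the current
# implementation, just one possible shape for the improvement above.
import threading

class FeedManager:
    def __init__(self):
        self.threads = {}  # feed_id -> (Thread, stop Event)

    def start(self, feed_id, analyze):
        stop_event = threading.Event()
        thread = threading.Thread(
            target=analyze, args=(feed_id, stop_event), daemon=True
        )
        self.threads[feed_id] = (thread, stop_event)
        thread.start()

    def stop(self, feed_id):
        thread, stop_event = self.threads.pop(feed_id)
        stop_event.set()  # signal the analysis loop to exit
        thread.join()

def analyze(feed_id, stop_event):
    while not stop_event.is_set():
        pass  # grab a frame, run detection, publish results
```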

RabbitMQ

RabbitMQ is the message queue we have chosen for cross-container asynchronous communication. The message queue is responsible for handling the start/stop signals between the back-end and the object detection. Besides this, RabbitMQ also receives analyzed frames from the object detection in a queue; this queue can be subscribed to by any consumer of choice. In the default application, the analyzed-frames queue is consumed by the front-end using the STOMP plugin to display live video feeds to the user in the browser.
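Because the frames queue can be subscribed to by any consumer, writing a custom one is straightforward. Below is a small sketch using pika; the queue name "analyzed_frames" is an assumption for illustration.

```python
# Sketch of a custom consumer for the analyzed-frames queue using pika;
# the queue name is an illustrative assumption, not the real one.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="analyzed_frames")

def on_frame(ch, method, properties, body):
    # body holds one analyzed frame as published by the object detection
    print(f"received frame of {len(body)} bytes")
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="analyzed_frames", on_message_callback=on_frame)
channel.start_consuming()
```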

The RabbitMQ dashboard is available on port 15672; it provides a nice view of what is happening inside your RabbitMQ instance and can be used for debugging. Port 5672 is used to publish/subscribe to RabbitMQ using AMQP, and port 15674 (the Web STOMP default) is used to connect to RabbitMQ topics using the STOMP plugin.
