ApoorvGit / Drishti-AI_Spectacles_for_blind

Artificial Intelligence based spectacles for blind people that enable them to know what is happening in their surroundings

HackForGood Hackathon 2022 (By Grab)

Theme: Open Format

Team Name: IDEATORS

Project Name: Drishti: AI-based Spectacles for Visually Impaired People

Solution:

AI-based spectacles that will tell blind people about their surroundings in real-time.

Features:

  1. It will narrate details about the surroundings in real time.
  2. It has OCR technology that helps the visually impaired person read books and newspapers.
  3. The facial recognition module helps the visually impaired person know who is sitting in front of him/her.
  4. The road sign/symbol recognition system recognizes road signs and gives instructions accordingly.
  5. It has a traffic light recognition system that tells whether the traffic light is red, green, or yellow.

Tech Stack:

(Tech stack diagram)

Working:

(Methodology diagram)

Modules:

A) Module 1 (Voice Module) -

It uses the gTTS library (Google Text-to-Speech) to convert a string to speech, and the playsound library to play the audio returned by gTTS.
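
A minimal sketch of this voice module, assuming the gTTS and playsound packages are installed; the speak helper and the speech.mp3 file name are illustrative, not the repository's exact code:

```python
from gtts import gTTS
from playsound import playsound

def speak(text: str) -> None:
    """Convert a string to speech with Google Text-to-Speech and play it aloud."""
    tts = gTTS(text=text, lang="en")   # generate speech for the given text
    tts.save("speech.mp3")             # write the audio to a temporary file
    playsound("speech.mp3")            # play the generated audio

speak("There is a person in front of you")
```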

B) Module 2 (Optical Character Recognition) -

It uses the Tesseract library, which takes an OpenCV frame as input, recognizes the text in it, and returns the text as a string.
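
A hedged sketch of such an OCR call, assuming pytesseract and OpenCV are installed and a Tesseract binary is on the PATH; the read_frame helper is an illustrative name:

```python
import cv2
import pytesseract

def read_frame(frame) -> str:
    """Recognize text in an OpenCV BGR frame and return it as a string."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # Tesseract works best on grayscale
    return pytesseract.image_to_string(gray).strip()

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(read_frame(frame))
cap.release()
```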

C) Module 3 (Live-Environment Captioning) -

We have trained our own deep learning model. It is a multimodal neural network that combines feature vectors obtained from both an RNN and a CNN, so training requires two inputs: the image to be described, fed to the CNN, and the words of the caption generated so far, fed as a sequence to the RNN. This module takes an OpenCV frame as input and returns a description of the frame.
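
The sketch below shows one way such a merge-style captioning network can be wired up in Keras; the layer sizes, vocabulary size, and maximum caption length are illustrative assumptions, not the values of the trained model:

```python
from tensorflow.keras.layers import Input, Dense, Embedding, LSTM, Dropout, add
from tensorflow.keras.models import Model

vocab_size, max_len, feat_dim = 5000, 34, 2048  # assumed sizes

# Image branch: a pre-extracted CNN feature vector for the frame
img_in = Input(shape=(feat_dim,))
img_vec = Dense(256, activation="relu")(Dropout(0.5)(img_in))

# Text branch: the caption generated so far, fed to an RNN (LSTM)
txt_in = Input(shape=(max_len,))
txt_emb = Embedding(vocab_size, 256, mask_zero=True)(txt_in)
txt_vec = LSTM(256)(Dropout(0.5)(txt_emb))

# Merge both feature vectors and predict the next word of the caption
merged = Dense(256, activation="relu")(add([img_vec, txt_vec]))
out = Dense(vocab_size, activation="softmax")(merged)

model = Model(inputs=[img_in, txt_in], outputs=out)
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.summary()
```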

D) Module 4 (Facial Recognition Module) -

It is built on the face_recognition library, which uses dlib's deep learning implementation to recognize the person in the image. It takes an OpenCV frame as input and returns the name as a string.
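
A minimal sketch of how the face_recognition library is typically used for this; the known-face image path and the name "Alice" are placeholders:

```python
import cv2
import face_recognition

# Encode a known face once (the path and name are examples)
known_img = face_recognition.load_image_file("known_faces/alice.jpg")
known_encoding = face_recognition.face_encodings(known_img)[0]

def recognize(frame) -> str:
    """Return the name of the recognized person in an OpenCV BGR frame, if any."""
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)            # the library expects RGB
    for encoding in face_recognition.face_encodings(rgb):   # one encoding per detected face
        if face_recognition.compare_faces([known_encoding], encoding)[0]:
            return "Alice"
    return "Unknown person"
```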

E) Module 5 (Road Sign Recognition Module) -

We have developed our own sequential model consisting of 6 layers (4 Conv2D layers and 2 fully connected layers), trained on the German Traffic Sign (GTSRB) Kaggle dataset (https://www.kaggle.com/datasets/meowmeowmeowmeowmeow/gtsrb-german-traffic-sign). This module takes an OpenCV frame as input and returns a description of the road sign present in the frame (if any).
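
A hedged Keras sketch of a 6-layer classifier matching this description (4 Conv2D layers plus 2 fully connected layers, with one output per GTSRB class); the filter counts, kernel sizes, and input resolution are illustrative assumptions:

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    Conv2D(32, (5, 5), activation="relu", input_shape=(32, 32, 3)),
    Conv2D(32, (5, 5), activation="relu"),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(256, activation="relu"),    # fully connected layer 1
    Dense(43, activation="softmax"),  # fully connected layer 2: 43 GTSRB classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```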

F) Module 6 (Traffic Light Classification Module) -

We have developed a 2-layer sequential model consisting of 1 Conv2D layer and 1 fully connected layer, trained on the Bosch Small Traffic Lights Dataset (https://hci.iwr.uni-heidelberg.de/content/bosch-small-traffic-lights-dataset). This module takes an OpenCV frame as input and returns an instruction according to the traffic light's color (if any).
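
A similar hedged sketch for this 2-layer classifier (1 Conv2D layer plus 1 fully connected layer); the input crop size and the three output classes (red, yellow, green) are assumptions for illustration:

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    Conv2D(16, (3, 3), activation="relu", input_shape=(32, 32, 3)),  # single Conv2D layer
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(3, activation="softmax"),  # fully connected layer: red / yellow / green
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```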

Architecture:

(Architecture diagram)

Instructions to run Drishti AI:

Step 1: Download the repository as a zip file
Step 2: Extract the zip
Step 3: Install the dependencies from requirements.txt
Step 4: Run main.py
Step 5: The webcam will start. There are 5 modes. The Live-Environment Captioning module runs first (Mode 1, press 1 to start this mode); press 2 to start Facial Recognition mode, 3 to start Road Sign Recognition mode, 4 to start Traffic Light Recognition mode, and 5 to start Optical Character Recognition mode (i.e. press 1, 2, 3, 4 or 5 to switch between modes). A sketch of this mode-switching loop follows these steps.
Step 6: Press the ESC key to exit
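
A hedged sketch of how such a webcam loop with keyboard mode switching could look; the module function names mentioned in the comments are illustrative, not the repository's actual API:

```python
import cv2

MODES = {ord("1"): "captioning", ord("2"): "face", ord("3"): "road_sign",
         ord("4"): "traffic_light", ord("5"): "ocr"}

cap = cv2.VideoCapture(0)
mode = "captioning"                      # Mode 1 (Live-Environment Captioning) runs by default
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Dispatch the frame to the module selected by `mode`,
    # e.g. caption_frame(frame), recognize_face(frame), ... (hypothetical helpers)
    cv2.imshow("Drishti", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == 27:                        # ESC ends the program
        break
    if key in MODES:                     # keys 1-5 switch between modes
        mode = MODES[key]
cap.release()
cv2.destroyAllWindows()
```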

Website Link : http://dristi.rf.gd/

YouTube video: https://youtu.be/ZR8VJvosMy0



Languages

Python 72.1%, Jupyter Notebook 27.9%