
DSC VIT

Cadmus

Automatically caption ASL videos using deep neural networks, trained on the dataset provided in the paper "Word-level Deep Sign Language Recognition from Video: A New Large-scale Dataset and Methods Comparison". The project aims to develop a browser extension that can provide live captioning for sign language within a video call.


Join Us Discord Chat

DOCS UI

Features

  • Live captioning for sign language during video calls.
  • All participants in the Google Meet or Zoom call first join a socket room through our extension.
  • Our model translates the sign language into text on the client server using sockets, and the text is then broadcast to the room (a minimal sketch follows this list).
  • The broadcast text appears as subtitles for everyone present in the meeting.
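
Below is a minimal sketch of the relay step, assuming a python-socketio server; the event names (join, prediction, caption), the port, and the handlers are illustrative assumptions, not the project's actual code:

```python
import eventlet
import socketio

# Hypothetical sketch of a Socket.IO relay: participants join a named
# room, and predicted captions are broadcast to everyone in that room.
sio = socketio.Server(cors_allowed_origins='*')
app = socketio.WSGIApp(sio)

@sio.event
def join(sid, data):
    # Each participant joins the shared room named by the extension.
    sio.enter_room(sid, data['room'])

@sio.event
def prediction(sid, data):
    # The signing participant's client sends model output here;
    # broadcasting makes it appear as subtitles for the whole room.
    sio.emit('caption', {'text': data['text']}, room=data['room'])

if __name__ == '__main__':
    # Assumed port; serve the relay with eventlet's WSGI server.
    eventlet.wsgi.server(eventlet.listen(('', 5000)), app)
```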

Architecture Overview

(Architecture diagram)

Usage

Let's see how to start the client server and begin making predictions! For Linux users: first cd into the client-server directory, install the requirements from requirements.txt inside a virtual environment, and then run:

sudo bash run.sh

Next, open another terminal in the same directory and make sure you're inside the virtual environment you created earlier, then run:

python3 charserver.py <INSERT A NAME FOR THE SOCKET ROOM>

Or, if you want to use the word-level prediction server, run:

python3 wordserver.py <INSERT A NAME FOR THE SOCKET ROOM>
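
Both servers take the socket-room name as a command-line argument. As a rough illustration of the prediction loop (the real charserver.py / wordserver.py internals may differ; the relay address, event names, and the stubbed model are assumptions):

```python
import sys
import cv2
import socketio

if len(sys.argv) < 2:
    sys.exit('usage: python3 charserver.py <ROOM NAME>')
room = sys.argv[1]

def predict_sign(frame):
    # Stand-in for the real deep model; returns a placeholder caption.
    return 'hello'

sio = socketio.Client()
sio.connect('http://localhost:5000')  # assumed relay address/port
sio.emit('join', {'room': room})

cap = cv2.VideoCapture(0)             # read frames from the webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    sio.emit('prediction', {'room': room, 'text': predict_sign(frame)})
```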

Now, anyone can use our extension to join the room and receive all subtitles. After joining the room, the person who will be signing must go to the host settings on Google Meet or Zoom and select the My Fake Webcam option under Camera, as shown below:

(Screenshot: selecting the My Fake Webcam option as the camera)
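
Under the hood, joining the room amounts to connecting to the relay and listening for caption events. A minimal sketch, reusing the same hypothetical event names and address as above:

```python
import socketio

sio = socketio.Client()

@sio.event
def caption(data):
    # The extension would overlay this text as a subtitle in the call.
    print('subtitle:', data['text'])

sio.connect('http://localhost:5000')   # assumed relay address/port
sio.emit('join', {'room': 'my-room'})  # hypothetical room name
sio.wait()                             # keep listening for captions
```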

Acknowledgements

Contributors

Sharanya Mukherjee

GitHub LinkedIn

Made with ❤️ by DSC VIT

About

License: GNU General Public License v3.0


Languages

  • Python: 98.8%
  • Shell: 1.2%