There are 35 repositories under the signlanguagerecognition topic.
Signapse is an open source software tool for helping everyday people learn sign language for free!
Signfy is a Video Chat app that incorporates sign language translation to bridge the communication gap between the deaf and hearing communities.
Bachelor Thesis at the Wroclaw University of Science and Technology.
This web-based app detects and interprets sign language into English words in real time, helping speech-impaired individuals communicate with others more easily.
Using YOLOv8 to train on a custom dataset for sign language recognition.
This program uses gesture detection to identify common ASL gestures as well as alphabet letters, translating them into sentences.
Applied an SSD model integrated with MobileNet for detection and recognition of sign gestures, trained using transfer learning, with the aim of developing a web app that performs real-time ASL recognition on user input and generates English text.
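An SSD detector like the one described above emits candidate boxes with class scores, which are then reduced to final gesture detections via non-maximum suppression. A minimal NumPy sketch of that post-processing step; the function names and the 0.5 overlap threshold are illustrative assumptions, not taken from the repo:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring boxes,
    dropping any box that overlaps an already-kept box too much."""
    order = np.argsort(scores)[::-1]   # indices sorted by score, descending
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep
```

The surviving box with the top class score gives the recognized gesture, which the app can then render as English text.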
American Sign Language Recognition
A 2D platformer that teaches American Sign Language (ASL) using Leap Motion-powered hand gestures
An application that translates sign language into audio and text.
This is a model to classify Vietnamese sign language using the Motion History Image (MHI) algorithm and a CNN.
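The MHI update at the core of that approach is compact enough to sketch directly in NumPy; the decay constant and motion threshold below are illustrative assumptions, not the repo's actual parameters:

```python
import numpy as np

def update_mhi(mhi, frame_diff, tau=10.0, threshold=30):
    """One Motion History Image step: pixels where the inter-frame
    difference exceeds the threshold are set to tau; all other pixels
    decay by 1 toward zero, so recently moving regions stay bright."""
    motion = frame_diff >= threshold
    return np.where(motion, tau, np.maximum(mhi - 1.0, 0.0))
```

Feeding successive absolute frame differences through this update yields a single grayscale image whose intensity encodes how recently each pixel moved; that image is what the CNN classifies.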
It is able to detect ten ASL gestures: Okay, Peace, Thumbs up, Thumbs down, Call me, Stop, I Love You, Hello, No, and Smile.
This repo contains the code for sign-language-recognition as part of our final year project.
This project involves creating a real-time sign language detection system that uses CNNs to translate sign language gestures into text. It aims to improve communication accessibility for the hearing-impaired by accurately recognizing and displaying gestures from live video input.
Teaching computers to understand sign language! This project uses image processing to recognize hand signs, making technology more inclusive and accessible.
Sign language gesture recognition is done in two ways. In the first, alphabet letters are detected from sign language and assembled into words, which are then converted to speech. The second method recognizes gestures that represent whole words.
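The letter-by-letter route hinges on turning a noisy stream of per-frame letter predictions into stable text before speech synthesis. A minimal sketch of that debouncing step, assuming a dedicated "space" token ends each word; the function name and the three-frame stability window are illustrative, not taken from the project:

```python
def letters_to_text(predictions, stable_frames=3):
    """Collapse per-frame letter predictions into text: a letter is
    committed only once its run of identical consecutive predictions
    reaches stable_frames, which filters out single-frame glitches."""
    text = []
    last, run = None, 0
    for p in predictions:
        run = run + 1 if p == last else 1
        last = p
        if run == stable_frames:           # fires exactly once per stable run
            text.append(" " if p == "space" else p)
    return "".join(text).strip()
```

The committed words can then be handed to a text-to-speech engine such as gTTS.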
Lessons and projects I’ve learned from Paul McWhorter’s AI and OpenCV tutorials, with additional improvements and insights.
Google Home feature that can recognize ASL (American Sign Language) using TensorFlow Lite, MNIST and OpenCV.
Shady Elkholy's graduation project, Arab Open University, Computer science
GestureGo facilitates bidirectional communication between people with hearing or speech impairments and others, lessening the communication gap and allowing everyone to understand and be understood.
AI-powered bot that converts text and speech into sign language, enhancing communication for the deaf and hard of hearing.
A real-time translator app for sign language using the MediaPipe Hands solution and an LSTM TFLite model.
Major Project in Final Year B.Tech (IT). Live Stream Sign Language Detection using Deep Learning.
Packages needed for this project: OpenCV, TensorFlow, PyEnchant, MediaPipe, Keras, NumPy, gTTS, and Tkinter.
This repository is part of the "learnSign" project that is under development and will be submitted for the award of Degree of Bachelor of Technology in Computer Science and Engineering from Rajasthan Technical University, Kota (India).
Code for the demo of the VGT-NL dictionary at Dag Van De Wetenschap 2023 and other events.
This project explores the use of deep learning models to classify American Sign Language (ASL) hand gestures into their corresponding letters (A-Z). It compares the performance of ResNet18, ResNet50, and a custom convolutional neural network on a dataset of hand gesture images.
Sign_languagues_recognition
Hebrew sign language real time recognition using CNN, Keras & OpenCV.
This repo includes a Turkish sign language dataset.
Converting sign language to text using OpenCV and machine learning algorithms.
This project is focused on sign language recognition using LSTM (Long Short-Term Memory) networks. The system captures keypoints from signers' hand movements and translates them into text using deep learning techniques.
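LSTM recognizers like this one consume fixed-length sequences of keypoint vectors, so the raw per-frame landmarks must first be stacked into overlapping windows. A NumPy sketch of that preprocessing step; the 30-frame window length is an assumption (a common choice in MediaPipe-plus-LSTM pipelines), not the project's actual setting:

```python
import numpy as np

def make_windows(keypoints, seq_len=30):
    """Stack per-frame keypoint vectors (shape: frames x features) into
    overlapping windows of shape (num_windows, seq_len, features),
    ready to feed to an LSTM."""
    frames = np.asarray(keypoints)
    n = frames.shape[0] - seq_len + 1
    if n <= 0:                         # clip too short to fill one window
        return np.empty((0, seq_len, frames.shape[1]))
    return np.stack([frames[i:i + seq_len] for i in range(n)])
```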
Sign Language Recognition System is an AI-powered application that enables real-time sign language recognition using MediaPipe and an MLP model. It captures hand gestures, extracts landmark features, and predicts sign language letters dynamically. The project also explores MobileNetV2 and aims to expand into Text-to-Sign Language generation.
SignLangSunc is a web app for real-time sign language recognition using your webcam, with custom class training, MobileNet for feature extraction, and KNN for classification. It supports saving and loading models and provides spoken predictions.
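The MobileNet-features-plus-KNN design described above reduces classification to a nearest-neighbor vote over stored feature vectors. A minimal NumPy sketch of that final KNN step; the function name and k=3 are illustrative assumptions, not taken from the app:

```python
import numpy as np

def knn_predict(query, train_feats, train_labels, k=3):
    """Majority vote among the k training feature vectors nearest
    (by Euclidean distance) to the query feature vector."""
    dists = np.linalg.norm(np.asarray(train_feats) - np.asarray(query), axis=1)
    nearest = np.argsort(dists)[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)
```

Because KNN stores examples rather than fitting weights, adding a custom class at runtime is just appending new feature vectors and labels, which is what makes the in-browser custom training workflow cheap.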