Noshin Nawal (Nawal095)

Location: Dhaka, Bangladesh


Noshin Nawal's repositories

Bidirectional-Multi-Constrained-K-shortest-path-Algorithm

Implementation of the paper "Bidirectional Multi-Constrained Routing Algorithms" by Baoxian Zhang, Jie Hao, and Hussein T. Mouftah.

Language: Jupyter Notebook · Stargazers: 0 · Issues: 0
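To illustrate the kind of problem this repo addresses (not the paper's actual bidirectional algorithm), here is a minimal label-setting sketch that minimizes path cost subject to a single delay bound; the graph format, function name, and example network are assumptions made for this sketch.

```python
import heapq

def constrained_shortest_path(graph, src, dst, max_delay):
    """Minimize path cost from src to dst subject to total delay <= max_delay.

    graph: {u: [(v, cost, delay), ...]} -- adjacency-list format assumed here.
    Returns (cost, path) or None if no feasible path exists.
    """
    pq = [(0, 0, src, [src])]   # labels: (cost, delay, node, path)
    accepted = {}               # node -> non-dominated (cost, delay) labels
    while pq:
        cost, delay, u, path = heapq.heappop(pq)
        if u == dst:
            return cost, path   # cheapest feasible label reaches dst first
        if any(c <= cost and d <= delay for c, d in accepted.get(u, [])):
            continue            # dominated: a cheaper-and-faster label exists
        accepted.setdefault(u, []).append((cost, delay))
        for v, c, d in graph.get(u, ()):
            if delay + d <= max_delay:  # prune labels violating the delay bound
                heapq.heappush(pq, (cost + c, delay + d, v, path + [v]))
    return None

g = {"a": [("b", 1, 5), ("c", 4, 1)], "b": [("d", 1, 5)], "c": [("d", 1, 1)]}
print(constrained_shortest_path(g, "a", "d", 4))  # -> (5, ['a', 'c', 'd'])
```

With the delay bound at 4, the cheap a-b-d route (cost 2, delay 10) is infeasible, so the search falls back to the pricier but faster a-c-d route. The paper's contribution is doing this kind of search bidirectionally and under multiple constraints at once.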

RACK-python

Automatic Query Reformulation for Code Search using Crowdsourced Knowledge

Language: Python · Stargazers: 0 · Issues: 0

RACK-Replication-Package

Replication package of RACK: Automated Query Reformulation for Code Search using Crowdsourced Knowledge

License: MIT · Stargazers: 0 · Issues: 0
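RACK's core idea is to reformulate a natural-language query by suggesting relevant API classes ranked by how often they co-occur with the query's keywords in crowdsourced Q&A data. A toy sketch of that ranking step, with a made-up co-occurrence table standing in for statistics RACK mines from Stack Overflow:

```python
from collections import defaultdict

# Toy keyword -> API co-occurrence counts; RACK mines such statistics from
# Stack Overflow questions and the API classes used in their answers.
cooccurrence = {
    "parse": {"SAXParser": 3, "DocumentBuilder": 2},
    "xml": {"DocumentBuilder": 4, "SAXParser": 2},
    "file": {"FileReader": 4, "BufferedReader": 3},
}

def suggest_apis(query, k=3):
    """Rank API classes by summed co-occurrence with the query's keywords."""
    scores = defaultdict(int)
    for word in query.lower().split():
        for api, count in cooccurrence.get(word, {}).items():
            scores[api] += count
    # Sort by descending score, breaking ties alphabetically.
    return sorted(scores, key=lambda api: (-scores[api], api))[:k]

print(suggest_apis("parse XML file"))
# -> ['DocumentBuilder', 'SAXParser', 'FileReader']
```

The suggested classes are then appended to the original query to improve code-search recall.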

Digit-Recognition

Handwritten Digit Recognition using MNIST Dataset

Language: Jupyter Notebook · Stargazers: 1 · Issues: 0
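A minimal sketch of the digit-recognition pipeline, using scikit-learn's built-in 8×8 `load_digits` set as an offline stand-in for MNIST (the repo presumably trains on the full 28×28 dataset, and its model choice may differ):

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# load_digits ships with scikit-learn, so this runs without a download.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X / 16.0, y, test_size=0.25, random_state=0)  # scale pixels to [0, 1]

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```

Even this simple linear classifier scores well above 90% on the small digits set; convolutional models push full-MNIST accuracy past 99%.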

Bangla-Sign-Language-Recognition-Using-Leap-Motion-Sensor

Sign language is used by hearing- and speech-impaired people to convey messages, but this gesture-based language is difficult for others to understand, and instantaneous feedback can significantly improve communication. In this paper, we propose a system that detects Bangla Sign Language using the Leap Motion Controller, a digital motion sensor that tracks the 3D motion of hands, fingers, and finger-like objects without physical contact. In a sign language system, gestures are defined as specific patterns or movements of the hands, so recognition requires a library of gesture templates: the system compares the sequence of data captured by the Leap Motion sensor against this library, finds the best match, and displays the result as text. For matching, we use the $P Point-Cloud Recognizer, an algorithm designed for rapid prototyping of gesture-based user interfaces that delivers an average accuracy of over 99% in user-dependent testing. The proposed model is designed so that hearing- and speech-impaired people can communicate easily and efficiently with others.

Stargazers: 1 · Issues: 0
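The template-matching step described above can be sketched as follows. This is a simplified take on $P-style point-cloud matching, not the full published recognizer: it downsamples by index rather than resampling by arc length, assumes equal-sized clouds, and uses hypothetical 2-D gesture templates in place of real Leap Motion traces.

```python
import math

def normalize(points, n=32):
    """Downsample to at most n points, scale to a unit box, centre on the
    centroid. (The real $P resamples uniformly by arc length.)"""
    step = max(1, len(points) // n)
    pts = points[::step][:n]
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    cx, cy = sum(xs) / len(pts), sum(ys) / len(pts)
    return [((x - cx) / scale, (y - cy) / scale) for x, y in pts]

def cloud_distance(a, b):
    """Greedy cloud matching: pair each point of a with its nearest
    still-unmatched point of b and sum the distances (equal sizes assumed)."""
    unmatched = list(range(len(b)))
    total = 0.0
    for p in a:
        j = min(unmatched, key=lambda i: math.dist(p, b[i]))
        unmatched.remove(j)
        total += math.dist(p, b[j])
    return total

def recognize(candidate, templates):
    """Return the name of the template set with the smallest cloud distance."""
    c = normalize(candidate)
    return min(templates, key=lambda name: min(
        cloud_distance(c, normalize(t)) for t in templates[name]))

# Two toy gesture templates: a horizontal stroke and an L-shaped corner.
templates = {
    "line": [[(i, 0) for i in range(10)]],
    "corner": [[(0, i) for i in range(5)] + [(i, 4) for i in range(5)]],
}
print(recognize([(i, 0.1 * i) for i in range(10)], templates))  # -> line
```

Because every cloud is scaled and centred before matching, recognition is tolerant of where and how large the user signs, which is what makes the approach practical for rapid prototyping.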

PreprocessingDataVisualization

Data visualization using sklearn

Language: Jupyter Notebook · Stargazers: 0 · Issues: 0
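A common preprocessing-plus-visualization recipe of the kind this repo covers: standardize the features, then project to two dimensions with PCA for plotting. The dataset choice (iris) is an assumption for the sketch, not necessarily what the notebook uses.

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Standardize each feature to zero mean / unit variance, then project to 2-D.
X, y = load_iris(return_X_y=True)
X_scaled = StandardScaler().fit_transform(X)
X_2d = PCA(n_components=2).fit_transform(X_scaled)
print(X_2d.shape)  # -> (150, 2)
```

The two columns of `X_2d` can then be passed to `matplotlib.pyplot.scatter` (colored by `y`) to visualize class separation. Scaling first matters because PCA is variance-driven, so unscaled features with large units would dominate the components.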