This project aims to develop a machine learning model for detecting sign language. Using a Keras Sequential model built from Long Short-Term Memory (LSTM) layers, it translates sign language into text or speech, improving communication accessibility for people who are deaf or hard of hearing.
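One possible shape for such a network is a Keras Sequential stack of LSTM layers feeding a softmax classifier. This is a hedged sketch only: the layer sizes, sequence length, and feature count below are illustrative assumptions, not the project's actual values.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def build_model(num_classes=27, timesteps=30, features=63):
    """Sketch of an LSTM sign-classification model.

    Assumptions (not from the project's code): 27 classes (A-Z plus blank),
    sequences of 30 frames, 63 features per frame.
    """
    model = Sequential([
        LSTM(64, return_sequences=True, input_shape=(timesteps, features)),
        LSTM(128, return_sequences=True),
        LSTM(64),
        Dense(64, activation="relu"),
        Dense(num_classes, activation="softmax"),  # one probability per sign
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

The stacked LSTMs let the model pick up temporal patterns across frames rather than classifying single images in isolation.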
- Make sure you have Python 3.11.x installed on your system.
- Ensure Python is on your PATH. Type "python --version" in a command prompt to check.
- Set up a virtual environment so nothing breaks system-wide. (The "python -m venv" command used below is built into Python; if you prefer the third-party virtualenv tool instead, run "pip install virtualenv".)
- Go to your project folder and open a command prompt there.
- In that command prompt, create a new virtual environment using the command - "python -m venv my-env"
- The previous step creates a folder called "my-env" inside your project folder. Now activate the environment by typing the following command - ".\my-env\Scripts\activate"
- With the environment active, install all the project requirements by typing the following command - "pip install -r requirements.txt"
- This installs everything you need to run the training and app Python files.
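The steps above condense into the following shell session (shown for Linux/macOS; on Windows the activate script is ".\my-env\Scripts\activate" as described above, and the project path here is a hypothetical placeholder):

```shell
cd /path/to/project                # your project folder (placeholder path)
python -m venv my-env              # create the virtual environment
. my-env/bin/activate              # activate it
pip install -r requirements.txt    # install the project's dependencies
```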
Run the following command in your command prompt -
python collectdata.py
The script uses your webcam to capture images. Instructions to add data:
- Hold 'a' to capture images for letter A
- Hold 'b' to capture images for letter B
- ...
- and so on through 'z'.
- Hold '.' to capture blank images
- Press 'Esc' to exit the program.
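A capture loop of this kind might look roughly like the sketch below, assuming OpenCV (`cv2`) for webcam access; the function names, folder layout, and file naming are hypothetical, not taken from collectdata.py itself.

```python
import os

def key_to_label(key):
    """Map a pressed key to a dataset label: 'a'-'z' -> 'A'-'Z', '.' -> 'blank'."""
    if ord('a') <= key <= ord('z'):
        return chr(key).upper()
    if key == ord('.'):
        return 'blank'
    return None

def collect(data_dir="data"):
    """Hypothetical capture loop: save a frame under data/<label>/ per keypress."""
    import cv2  # OpenCV is only needed for the capture loop itself
    cap = cv2.VideoCapture(0)
    counts = {}
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("Collecting data", frame)
        key = cv2.waitKey(1) & 0xFF
        if key == 27:  # Esc exits the program
            break
        label = key_to_label(key)
        if label:
            folder = os.path.join(data_dir, label)
            os.makedirs(folder, exist_ok=True)
            n = counts.get(label, 0)
            cv2.imwrite(os.path.join(folder, f"{n}.jpg"), frame)
            counts[label] = n + 1
    cap.release()
    cv2.destroyAllWindows()
```

Holding a key fires repeated keypress events, which is why "keep pressing" a letter saves a stream of images for that class.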
Run the following command in your command prompt -
python train_model.py
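Conceptually, train_model.py fits the LSTM network on the collected data and saves the result for the app to load. The sketch below shows that flow under assumed shapes, hyperparameters, and file name; the real script's values will differ.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def train(X, y, num_classes, epochs=50, model_path="model.h5"):
    """Hypothetical training step: X is (samples, timesteps, features),
    y holds integer class labels. Epochs and file name are illustrative."""
    model = Sequential([
        LSTM(64, input_shape=X.shape[1:]),
        Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(X, y, epochs=epochs, verbose=0)
    model.save(model_path)  # saved weights are loaded later by the app
    return model
```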
Run the following command in your command prompt -
python app.py
If you have trained your own model, you need to update the following variable names to match your model's filename.
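As one possible shape for this, a minimal inference sketch is shown below. `MODEL_PATH`, `LABELS`, and the helper functions are hypothetical names for illustration; check app.py for the actual variables and point the path at whatever file your training run produced.

```python
import numpy as np
from tensorflow.keras.models import load_model

# Hypothetical names -- edit to match your own trained model's filename.
MODEL_PATH = "model.h5"
LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)] + ["blank"]

def load_sign_model(path=MODEL_PATH):
    """Load the trained Keras model from disk."""
    return load_model(path)

def predict_sign(model, sequence):
    """Return the predicted label for one (timesteps, features) input sequence."""
    probs = model.predict(sequence[np.newaxis, ...], verbose=0)[0]
    return LABELS[int(np.argmax(probs))]
```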