- 1st Place at the Hackathon
- Winner of the Miner & Kasch - Best Data Science Hack
- Winner of the Most Unique Hack category
We wanted to create a tool that could help visually impaired people navigate the world.
Vision helps people with visual impairments navigate the world. It can identify objects in front of the wearer and read text aloud in response to simple voice commands.
The glasses combine voice recognition, image processing, object recognition, and text recognition to serve as a simple assistant. We used a hot glue gun to attach a tiny camera to the lens, and mounted the Raspberry Pi board and a speaker on the side of the frame. The camera and speaker are both wired to the Raspberry Pi, which is connected to our monitor for visual output.
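The assistant flow described above can be sketched as a small command dispatcher. This is a minimal illustration, not the project's actual code: the function name, command keywords, and the commented hardware calls are all assumptions.

```python
def route_command(transcript):
    """Map a recognized voice command to the feature that should handle it.

    Hypothetical sketch of the assistant's dispatch logic; the keywords
    below are illustrative assumptions, not the project's real commands.
    """
    transcript = transcript.lower()
    if "read" in transcript:                # e.g. "read the text"
        return "ocr"                        # hand off to Tesseract text recognition
    if "what" in transcript or "object" in transcript:
        return "detect"                     # hand off to TensorFlow object detection
    return "unknown"                        # fall back to a help prompt


# On the glasses, the result would pick which pipeline the Raspberry Pi runs
# (hardware-dependent, shown here only as commented pseudocode):
#   frame = camera.capture()                       # OpenCV frame grab
#   text = pytesseract.image_to_string(frame)      # if routed to "ocr"
#   speaker.say(text)                              # read the result aloud
```

Keeping the routing logic separate from the hardware calls also makes it testable on a laptop before deploying to the Pi.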
(Images: the glasses; testing the glasses)
We ran into many optimization issues with the speed and physical spacing of the hardware. The project had so many library dependencies that one small issue could cause the whole thing to fail. Other challenges included designing a functional layout for the glasses, camera, and speaker, and adapting to minimal resources.
After hours of hard work, we were overjoyed when we heard the speaker accurately describe what the glasses detected. We produced a working proof of concept in a short amount of time, and we managed to work around the resources we were missing.
We gained further experience handling unexpected problems, much like in a real-world production environment.
- TensorFlow
- Tesseract
- Raspberry Pi
- Python
- OpenCV