HoloHelp: HoloLens Object Detection for a Guided Interaction


[HoloHelp logo]


My bachelor's degree thesis project @ University of Catania, 2021.

The primary aim of this thesis is to help, assist, and support people in using a specific object they are observing or interacting with.

Starting from Microsoft HoloLens 2, the idea was implemented with a custom object detector trained in the cloud via Microsoft Azure Custom Vision. To train it, we selected 9 specific classes from the COCO dataset and uploaded over 1,000 images.
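
The data-preparation code is not part of this repository, but as a rough sketch of the upload step, the snippet below shows how a COCO-style bounding box (pixels, `[x, y, width, height]`) can be converted to Custom Vision's normalized regions and pushed to a project. It assumes the Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training NuGet package; the endpoint, key, project ID, image size, and file name are placeholders, not values from the thesis.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training;
using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training.Models;

class UploadTrainingImage
{
    static void Main()
    {
        // Hypothetical credentials; the project itself is created beforehand in the Custom Vision portal.
        var client = new CustomVisionTrainingClient(
            new ApiKeyServiceClientCredentials("<TRAINING_KEY>"))
        {
            Endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
        };
        Guid projectId = Guid.Parse("<PROJECT_ID>");

        // One tag per COCO class used for training (e.g. "bottle"); created once and reused.
        Tag bottleTag = client.CreateTag(projectId, "bottle");

        // COCO annotations store boxes as [x, y, width, height] in pixels;
        // Custom Vision expects left/top/width/height normalized to [0, 1].
        double imgW = 640, imgH = 480;             // image size, assumed known
        double x = 120, y = 80, w = 200, h = 150;  // example COCO-style box
        var region = new Region(bottleTag.Id, x / imgW, y / imgH, w / imgW, h / imgH);

        byte[] contents = File.ReadAllBytes("bottle_0001.jpg");
        var batch = new ImageFileCreateBatch(new List<ImageFileCreateEntry>
        {
            new ImageFileCreateEntry("bottle_0001.jpg", contents, null, new List<Region> { region })
        });
        client.CreateImagesFromFiles(projectId, batch);

        // Once all images are uploaded, a training iteration can be started.
        client.TrainProject(projectId);
    }
}
```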

Usage


The goal is to make the tool as manageable and easy to use as possible. The idea is therefore to rely on a single, simple voice command: HoloHelp. Nothing more. Once it is pronounced, HoloLens immediately takes a picture and saves the eye gaze coordinates of the object the user is looking at. Once the picture and this information are saved, an API request is sent to Microsoft Azure Custom Vision, and a short audio and AR video guide explaining how to use the detected object appears.
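
The actual scripts live in the Unity project; purely as an illustrative sketch of the flow described above, the snippet below wires the HoloHelp keyword to a handler that reads the eye gaze (assuming MRTK's eye gaze provider is enabled) and posts the captured JPEG to a Custom Vision prediction endpoint. The endpoint URL, keys, and the GetLatestPhotoJpeg helper are hypothetical placeholders; the real project captures the photo through the HoloLens camera pipeline.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;
using UnityEngine.Windows.Speech;
using Microsoft.MixedReality.Toolkit;   // assumed: MRTK provides the eye gaze provider

public class HoloHelpTrigger : MonoBehaviour
{
    // Hypothetical Custom Vision prediction endpoint and key.
    private const string PredictionUrl =
        "https://<resource>.cognitiveservices.azure.com/customvision/v3.0/Prediction/<PROJECT_ID>/detect/iterations/<ITERATION>/image";
    private const string PredictionKey = "<PREDICTION_KEY>";

    private KeywordRecognizer recognizer;

    private void Start()
    {
        // The single voice command: saying "HoloHelp" starts the whole flow.
        recognizer = new KeywordRecognizer(new[] { "HoloHelp" });
        recognizer.OnPhraseRecognized += args => StartCoroutine(CaptureAndDetect());
        recognizer.Start();
    }

    private IEnumerator CaptureAndDetect()
    {
        // Eye gaze at the moment of the command (MRTK eye gaze provider, assumed enabled).
        var gaze = CoreServices.InputSystem.EyeGazeProvider;
        Vector3 gazeHit = gaze.HitPosition;
        Debug.Log($"Eye gaze hit position: {gazeHit}");

        // Placeholder for the HoloLens photo capture step; here we assume the JPEG bytes exist.
        byte[] imageBytes = GetLatestPhotoJpeg();

        using (var request = new UnityWebRequest(PredictionUrl, UnityWebRequest.kHttpVerbPOST))
        {
            request.uploadHandler = new UploadHandlerRaw(imageBytes);
            request.downloadHandler = new DownloadHandlerBuffer();
            request.SetRequestHeader("Prediction-Key", PredictionKey);
            request.SetRequestHeader("Content-Type", "application/octet-stream");
            yield return request.SendWebRequest();

            // The JSON response lists detected tags, probabilities, and bounding boxes;
            // the detection closest to the gaze point would drive the audio / AR guide.
            Debug.Log(request.downloadHandler.text);
        }
    }

    private byte[] GetLatestPhotoJpeg()
    {
        // Hypothetical helper standing in for the real photo capture pipeline.
        return System.IO.File.ReadAllBytes(Application.persistentDataPath + "/holohelp.jpg");
    }
}
```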

[HoloHelp demo]

Credits

The thesis was supervised by Professor Antonino Furnari, in collaboration with NEXT VISION.

Languages

C#: 97.5%
ShaderLab: 1.2%
Objective-C: 0.7%
GLSL: 0.1%
HLSL: 0.1%
C++: 0.1%
Objective-C++: 0.1%
Jupyter Notebook: 0.1%
C: 0.0%
CMake: 0.0%