This project provides a hand segmentation solution based on the YOLACT deep learning network, trained on the Rendered Hand Pose Dataset.
The project works on images, videos, and webcam streams, and comes with an HMI (graphical interface) that allows the user to:

- choose the type of input
- run hand segmentation on the input
- display the source data and the output
- save the output to the test folder by pressing the "Save" button

A rough sketch of the input-handling flow is shown below.
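The snippet below is a minimal sketch of how the input dispatch might look, assuming OpenCV is used for I/O. `segment_hands` is a hypothetical stand-in for the actual YOLACT inference call, and the paths and window names are illustrative only.

```python
# Minimal sketch of the input-dispatch logic (image / video / webcam).
# `segment_hands` is a hypothetical placeholder for the YOLACT inference call;
# paths and window names are assumptions, not the app's actual values.
import cv2

def segment_hands(frame):
    # Placeholder: the real app would run the YOLACT model here and
    # return the frame with the predicted hand masks overlaid.
    return frame

def process_source(source):
    """Run hand segmentation on an image path, a video path, or a webcam index."""
    if isinstance(source, str) and source.lower().endswith((".jpg", ".jpeg", ".png")):
        frame = cv2.imread(source)
        output = segment_hands(frame)
        cv2.imwrite("test/output.png", output)   # hypothetical save location
        return

    capture = cv2.VideoCapture(source)           # video file path or webcam index (e.g. 0)
    while capture.isOpened():
        ok, frame = capture.read()
        if not ok:
            break
        output = segment_hands(frame)
        cv2.imshow("Hand segmentation", output)
        if cv2.waitKey(1) & 0xFF == ord("q"):    # press 'q' to stop
            break
    capture.release()
    cv2.destroyAllWindows()
```

For example, `process_source("hand.jpg")` would handle an image, `process_source("clip.mp4")` a video file, and `process_source(0)` the webcam.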
The model was initialized from a pretrained ResNet backbone and then trained on the Rendered Hand Pose Dataset, which consists of rendered (synthetic) images of hands.
Points to improve:

- speed up execution on video streams
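One possible way to address the video speed issue, sketched below under the assumption that per-frame YOLACT inference is the bottleneck, is to downscale frames before inference and only run the model every N frames, reusing the last result in between. `segment_hands`, the inference size, and the frame interval are all hypothetical values, not settings from this project.

```python
# Sketch of a possible speed-up for video streams: downscale frames and run
# the model only every N frames, reusing the previous result in between.
# `segment_hands` is again a hypothetical stand-in for the YOLACT call.
import cv2

def segment_hands(frame):
    return frame  # placeholder for the real model call

def process_video_fast(path, inference_size=(550, 550), every_n=3):
    capture = cv2.VideoCapture(path)
    last_output = None
    index = 0
    while capture.isOpened():
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n == 0 or last_output is None:
            small = cv2.resize(frame, inference_size)  # smaller input -> faster inference
            last_output = segment_hands(small)
        cv2.imshow("Hand segmentation (fast)", last_output)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
        index += 1
    capture.release()
    cv2.destroyAllWindows()
```

This trades some temporal accuracy of the masks for throughput; the frame interval can be tuned to the application.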
Examples of hand segmentation using the app: