- Aims to recognise various human activities in a video-file input using machine-learning algorithms.
- The LSTM model is trained on the dataset provided at https://github.com/stuarteiffert/RNN-for-Human-Activity-Recognition-using-2D-Pose-Input
- Detectron2 is used for pose estimation, and the dataset is mapped to the Detectron2 keypoint output format before training the LSTM model.
- Currently the model classifies an action into 6 categories:
  - "JUMPING"
  - "HIGH JUMP"
  - "PUNCHING"
  - "HANDS UP"
  - "WAVING HAND"
  - "HANDS TOGETHER"
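A minimal sketch of how the classifier's scores could be decoded into one of these labels. The label order and the helper `decode_prediction` are assumptions for illustration, not taken from the trained model:

```python
# Assumed label order; the trained model's actual class indices may differ.
LABELS = [
    "JUMPING",
    "HIGH JUMP",
    "PUNCHING",
    "HANDS UP",
    "WAVING HAND",
    "HANDS TOGETHER",
]

def decode_prediction(scores):
    """Map a list of 6 class scores (e.g. LSTM softmax output) to a label."""
    if len(scores) != len(LABELS):
        raise ValueError("expected one score per label")
    best = max(range(len(scores)), key=lambda i: scores[i])
    return LABELS[best]

# Example: a score vector peaking at index 3 decodes to "HANDS UP".
print(decode_prediction([0.01, 0.02, 0.05, 0.80, 0.07, 0.05]))  # HANDS UP
```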
- ngrok is used to expose the web page for output and interaction.
- References and credits: LearnOpenCV.
- The original dataset was created using OpenPose.
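Since the dataset was produced with OpenPose but inference uses Detectron2, the keypoints must be reordered: OpenPose's COCO model emits 18 joints (including a "neck" point), while Detectron2 emits the standard 17 COCO keypoints in a different order. The index table and `remap_frame` helper below are a plausible sketch of that conversion under those assumptions, not the project's actual code:

```python
# For each of the 17 COCO/Detectron2 keypoints, the index of the matching
# joint in OpenPose's 18-point COCO output. OpenPose's extra "neck" joint
# (index 1) has no COCO counterpart and is dropped.
OPENPOSE_TO_COCO = [0, 15, 14, 17, 16, 5, 2, 6, 3, 7, 4, 11, 8, 12, 9, 13, 10]

def remap_frame(openpose_points):
    """Reorder one frame of 18 OpenPose (x, y) pairs into Detectron2's
    17-keypoint COCO order."""
    if len(openpose_points) != 18:
        raise ValueError("expected 18 OpenPose keypoints")
    return [openpose_points[i] for i in OPENPOSE_TO_COCO]

# Example: with dummy points (i, i), COCO index 1 (left eye) should come
# from OpenPose index 15.
frame = [(i, i) for i in range(18)]
print(remap_frame(frame)[1])  # (15, 15)
```

Applying this per frame yields sequences in Detectron2's keypoint order, which can then be fed to the LSTM for training and inference.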