# Gesture Classification using Kinect

Kinect skeletal-frame-based gesture classification.

Click for the YouTube video of the real-time application:
## Prerequisites

- TensorFlow
- Python libraries:

```shell
chmod +x pythonReady.sh
yes "yes" | sudo sh pythonReady.sh
```
## Run

```shell
python main.py Train
python main.py Test
```
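The entry point takes a single mode argument, `Train` or `Test`. A minimal sketch of how such a dispatch could look (the function names `train` and `test` here are hypothetical, not necessarily those in this repo's `main.py`):

```python
import sys

def train():
    # Placeholder for the training routine.
    return "training"

def test():
    # Placeholder for the evaluation routine.
    return "testing"

def main(argv):
    # Default to training when no mode is given.
    mode = argv[1] if len(argv) > 1 else "Train"
    if mode == "Train":
        return train()
    elif mode == "Test":
        return test()
    raise ValueError("Usage: python main.py [Train|Test]")

if __name__ == "__main__":
    print(main(sys.argv))
```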
## ToDo

- Add the "ghost" class to the data set.
- Plot the results in Python.
- Apply a CNN to the raw RGB images.
- Apply face recognition for operator identification.
## Problem Setup

MS Kinect usually returns skeletal coordinates like those in the first picture below. The gestures to classify are:
- Idle
- Move Forward
- Move Back
- Takeoff/landing
![](https://github.com/ElliotHYLee/GestureClassifier/raw/master/Images/result_human_new.jpg?raw=true)
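Before classification, each skeletal frame can be flattened into a fixed-length feature vector. A minimal sketch, assuming the Kinect v1 skeleton of 20 joints with (x, y, z) coordinates each; the joint count and ordering are illustrative, not taken from this repo's code:

```python
NUM_JOINTS = 20  # Kinect v1 SDK tracks 20 skeletal joints

def frame_to_features(joints):
    """Flatten one skeletal frame, given as a list of (x, y, z)
    joint tuples, into a single 60-dim feature vector."""
    if len(joints) != NUM_JOINTS:
        raise ValueError(
            "expected %d joints, got %d" % (NUM_JOINTS, len(joints)))
    return [coord for joint in joints for coord in joint]
```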
However, it sometimes sees "ghosts," as shown in the second picture.

It is therefore necessary to tell whether a skeletal stream comes from an actual human. Fortunately, human and "ghost" skeletons have visibly different patterns, so a neural network with a single hidden layer can easily classify the differences.
Next, the four gestures (idle, move forward, move back, and takeoff/landing) are fed in as well. As a result, only five classes are needed in total; the fifth class is simply the "ghost" pattern.
![](https://github.com/ElliotHYLee/GestureClassifier/raw/master/Images/idel.jpg?raw=true)
![](https://github.com/ElliotHYLee/GestureClassifier/raw/master/Images/proceed_gesutre.jpg?raw=true)
![](https://github.com/ElliotHYLee/GestureClassifier/raw/master/Images/retreat_g.jpg?raw=true)
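The single-hidden-layer classifier described above can be sketched as a forward pass. This NumPy version is illustrative only (the repo itself uses TensorFlow), and the layer sizes, 60 skeletal inputs, 32 hidden units, and 5 output classes, are assumptions rather than the actual hyperparameters:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class OneHiddenLayerNet:
    """Forward pass of a single-hidden-layer classifier:
    60 skeletal inputs -> 32 hidden units (ReLU) -> 5 classes."""

    def __init__(self, n_in=60, n_hidden=32, n_out=5, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def predict_proba(self, x):
        # Hidden layer with ReLU activation, then softmax output.
        h = np.maximum(0.0, x @ self.W1 + self.b1)
        return softmax(h @ self.W2 + self.b2)
```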
## Results

Accuracy = 0.90