ActionAI 🤸

ActionAI is a Python library for training machine learning models to classify human actions. It is a generalization of our yoga smart personal trainer, which is included in this repo as an example.

YogAI

Getting Started

These instructions show how to prepare your image data, train a model, and use that model to classify human actions from image samples. See the Deployment section for notes on running the project on a live stream.

Prerequisites

ActionAI targets Python 3.x; the Python packages it depends on are listed in requirements.txt.

Installing

We recommend using a virtual environment to avoid conflicts with your system's global configuration. You can install the required dependencies via pip:

# Assuming your python path points to python 3.x
pip install -r requirements.txt

Jetson Nano Installation

We use the trt_pose repo to extract pose estimates. Please refer to that repo to install its dependencies.

All preprocessing, training, and deployment configuration variables are stored in the conf.py file in the config/ directory. You can create your own conf.py files and store them in this directory for fast experimentation.

The included conf.py imports a LogisticRegression model as the classifier by default.
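
As a rough illustration, a minimal conf.py could look like the sketch below. Only the classifier and csv_path names are taken from the example usage later in this README; the file path and model choice shown here are assumptions.

# config/conf.py -- minimal sketch; only classifier and csv_path are
# names used elsewhere in this README, the rest is illustrative
from sklearn.linear_model import LogisticRegression

# where preprocess.py writes the staged dataset (assumed filename)
csv_path = 'data/dataset.csv'

# factory used by actionModel() to build the pipeline's classifier
classifier = LogisticRegression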

Example

After preprocessing your image data with the preprocess.py script, you can create a model by calling the actionModel() function, which builds a scikit-learn pipeline. Then call the trainModel() function with your data to train it:

# Stage your model
pipeline = actionModel(config.classifier())

# Train your model
model = trainModel(config.csv_path, pipeline)
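
Assuming trainModel() returns the fitted pipeline, it can then be used like any other scikit-learn estimator. The snippet below is only a sketch; the 'label' column name is an assumption about the CSV produced by preprocess.py.

# Sketch: reuse the trained pipeline on rows from the staged CSV
import pandas as pd

df = pd.read_csv(config.csv_path)
X = df.drop(columns=['label'])      # 'label' column name is an assumption
print(model.predict(X)[:5])         # predicted action classes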

Data processing

Arrange your image data as a directory of subdirectories, each named after the label of the images it contains. Your directory structure should look like this:

├── images_dir
│   ├── class_1
│   │   ├── sample1.png
│   │   ├── sample2.jpg
│   │   ├── ...
│   ├── class_2
│   │   ├── sample1.png
│   │   ├── sample2.jpg
│   │   ├── ...
.   .
.   .

Samples should be standard image files recognized by the Pillow library.

To generate a dataset from your images, run the preprocess.py script.

python preprocess.py

This will stage the labeled image dataset in a csv file written to the data/ directory.
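
As a sanity check, the staged file is a plain CSV that can be inspected with pandas; the filename below is an assumption, since the script chooses the output name.

# Sketch: inspect the staged dataset (filename in data/ is an assumption)
import pandas as pd

df = pd.read_csv('data/dataset.csv')
print(df.shape)    # rows = image samples
print(df.head())   # peek at the feature columns and labels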

Training

The training script reads the CSV file into a dataframe; a custom scikit-learn transformer then estimates body keypoints to produce a low-dimensional feature vector for each sample image. This representation is fed into the scikit-learn classifier set in the config file.
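
Conceptually, the pipeline pairs a pose-estimation transformer with the configured classifier. The sketch below only illustrates the idea; PoseExtractor is a hypothetical stand-in, not the name of ActionAI's actual transformer.

# Conceptual sketch of the training pipeline; PoseExtractor is hypothetical
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

class PoseExtractor(BaseEstimator, TransformerMixin):
    """Maps each image sample to a low-dimensional vector of body keypoints."""
    def fit(self, X, y=None):
        return self

    def transform(self, X):
        # In ActionAI this step runs a pose-estimation model (tflite/trt_pose);
        # here it is a placeholder returning one keypoint vector per sample.
        return np.array([self._keypoints(sample) for sample in X])

    def _keypoints(self, sample):
        raise NotImplementedError("stand-in for the real pose model")

pipeline = Pipeline([
    ('pose', PoseExtractor()),
    ('clf', LogisticRegression()),   # classifier comes from config/conf.py
])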

Run the train.py script to train and save a classifier:

python train.py

The pickled model will be saved in the models/ directory.

Deployment

We've provided a sample inference script, inference.py, that will read input from a webcam, an MP4 file, or an RTSP stream, run inference on each frame, and print the inference results.
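
For reference, a bare-bones version of such a loop is sketched below using OpenCV. This is not the contents of inference.py; the model path and the way frames are handed to the pipeline are assumptions.

# Sketch of a frame-by-frame inference loop (not the actual inference.py)
import pickle
import cv2

with open('models/classifier.pkl', 'rb') as f:   # saved by train.py; path is assumed
    model = pickle.load(f)

source = 0   # 0 = webcam; an mp4 path or RTSP URL also works with VideoCapture
cap = cv2.VideoCapture(source)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # The pipeline's pose transformer turns the frame into keypoint features
    # before the classifier runs; predict() expects a batch, so wrap the frame.
    print(model.predict([frame])[0])
cap.release()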

Contributing

Please read CONTRIBUTING.md for details on our code of conduct and the process for submitting pull requests.

License

This project is licensed under the GNU General Public License v3.0; see the LICENSE.md file for details.

References
