smellslikeml / ActionAI

Real-Time Spatio-Temporally Localized Activity Detection by Tracking Body Keypoints

Home Page: https://www.hackster.io/actionai/actionai-custom-tracking-multiperson-activity-recognition-fa5cb5


train_sequential dataset

JJLimmm opened this issue · comments

Hi @smellslikeml ,

I have read through the README.md provided, but I would like to clarify some things that are not mentioned in it.

  1. For the dataset, inside each subdirectory (named after the action label we want to classify), do we put in the sequence of images that constitutes the action (e.g. squatting), or only images of people in the squat position?
  2. Related to the 1st question above: if a sequence of images is required, can we put in more than one squatting sequence?
  3. Do we only have to change the conf.py file when using train_sequential? What is the list of things we need to modify?

Thank you!

Hi @smellslikeml ,

Thanks for sharing more details on the workflow for this repo.
I have another question, and it pertains to the classifier. Is the dataset preparation the same as for training the LSTM (i.e. sequences of images rather than single images of the action)? Or do I only need to include single images of the action, with the label for the action as the folder name?

For preprocessing the dataset into the output CSV file, preprocess.py seems to prepare the data only for the LogisticRegression classifier and not for the LSTM. How did you prepare the data for training the LSTM model?

Thank you!

@smellslikeml Oh and also, for the classifier.sav model, what type of classifier are you using?

And if I want to classify more than 2 classes (e.g. 5 classes: squats, lunges, walking, standing, sitting), what do I need to change to train a new classifier?

Thanks!

I am not a maintainer of this repo so please remove the @mention of my name.

The .sav format was for saving models from the scikit-learn framework.
These kinds of activities (squat, lunge, etc) are good for ActionAI since they are well-characterized by body pose and relatively slowly varying.

You only need to add samples to the training workflow, or add buttons to the PS3 controller configuration in the mapping defined by activity_dict in `experimental/config.py`.
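For reference, a `.sav` file in this context is typically just a pickled scikit-learn estimator. A minimal sketch of saving and restoring one, assuming a 36-dimensional flattened pose feature (18 keypoints × (x, y)) that is illustrative and not taken from this repo:

```python
# Sketch: persisting a scikit-learn classifier as a .sav file via pickle.
# The 36-dim feature layout (18 keypoints x (x, y)) is an assumption.
import pickle

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy pose-feature data: one flattened keypoint vector per labeled frame.
rng = np.random.default_rng(0)
X = rng.random((20, 36))
y = np.array(["squat"] * 10 + ["stand"] * 10)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Save to .sav and load it back, mirroring the classifier.sav convention.
with open("classifier.sav", "wb") as f:
    pickle.dump(clf, f)
with open("classifier.sav", "rb") as f:
    restored = pickle.load(f)

print(restored.predict(X[:2]))  # predicted activity labels for two frames
```

Adding more activity classes then just means adding more labels to `y` (and, in this repo, more entries to the activity_dict mapping).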


@smellslikeml
So the .sav and .h5 formats are actually just from different frameworks (scikit-learn and tf.keras, respectively)?
If training the classifier from scikit-learn, do we then have to put in a sequence of images, or just single images capturing the action?

Yes, that's right - .sav is from scikit-learn, .h5 from tf.keras. If training a classifier from scikit-learn, you could use a sequence of pose estimations.
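One common way to feed a sequence of pose estimations to a scikit-learn model is to flatten a fixed-length window of keypoint vectors into a single feature row. A sketch under assumed shapes (the window length and keypoint count here are illustrative, not values from this repo):

```python
# Sketch: turning a fixed-length window of pose estimations into one
# feature row for a scikit-learn classifier. WINDOW and KEYPOINT_DIM
# are assumptions for illustration, not repo values.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

WINDOW = 5          # frames per training sample (assumed)
KEYPOINT_DIM = 36   # 18 keypoints x (x, y) per frame (assumed)

def window_to_feature(frames):
    """Flatten a (WINDOW, KEYPOINT_DIM) pose sequence into one row."""
    frames = np.asarray(frames)
    assert frames.shape == (WINDOW, KEYPOINT_DIM)
    return frames.reshape(-1)

# Toy data: two activities, several windows each.
rng = np.random.default_rng(0)
X = np.stack([window_to_feature(rng.random((WINDOW, KEYPOINT_DIM)))
              for _ in range(12)])
y = ["squat"] * 6 + ["lunge"] * 6

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict(X[:1]))  # predicted activity for the first window
```

Because each window becomes one fixed-length vector, the classifier never sees a variable-length sequence, which is what lets sequence data work with estimators like KNN or LogisticRegression at all.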


Hi @mayorquinmachines ,

Thanks for clarifying! But if I were to classify 5 classes (squats, lunges, walking, sitting, standing), wouldn't a sequence of images confuse the classifier if, say, I used the KNN classifier from scikit-learn?