Transpondancer

Dance movement classifier for dance videos.

Transmit, Respond, Dance it.
View the demo »

About the Project

Each dance creates its own body-relation system of knowledge: in sensing, anatomical structures, emotional codings of body parts, metaphors, expression, and imagination. The vocabulary itself is complex and reflects synaesthetic relations of body and memory, historical transformations, and body-based knowledge, yet there is no dance encyclopedia. So how can we name a movement in dance? And how can AI help achieve this in a generalized manner?

Transpondancer is a tool that automatically generates a textual step-by-step dance guide from any dancing video. To achieve this, we propose a framework along with a prototype that can grow into a full product given sufficient data.

Motivation

  • How can we name a movement when each dance creates its own body-relation system of knowledge in sensing, anatomical structures, and emotional codings of body parts?
  • Transpondancer addresses these issues through movement poetics and multilingual body knowledge. Dance scholars, choreographers, dancers, and anyone who wants to learn more about dance backgrounds, vocabulary, and the ways movements are created can take part in the process.
  • Oral vocabulary that has never been written down is recorded and saved from disappearing: a fluid dance encyclopedia on the move.

Framework

The framework below shows how one can tackle this challenge. In what follows we break it down and provide a walkthrough.

System

Step 1: Extracting frames from the dance video dataset (e.g. a short ballet excerpt, followed by two sliced frames).
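A minimal sketch of this step using OpenCV is shown below; the video path, output folder, and sampling rate are placeholders, not files shipped with this repo.

```python
# Sample every n-th frame of a dance video to JPEGs (illustrative paths).
import os
import cv2

def extract_frames(video_path, out_dir, every_n=10):
    """Save every n-th frame of the video as a JPEG in out_dir."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video (or unreadable file)
            break
        if idx % every_n == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:05d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()

extract_frames("ballet_clip.mp4", "Data/frames")
```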

Step 2: Generating skeleton postures using a point-light display, or directly applying AlphaPose.
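If you go the AlphaPose route, an invocation typically looks like the following; the exact script, config, and checkpoint paths depend on the AlphaPose release and model-zoo weights you download, so treat this as a sketch rather than a verified command.

```sh
python scripts/demo_inference.py \
    --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml \
    --checkpoint pretrained_models/fast_res50_256x192.pth \
    --video ballet_clip.mp4 --outdir output/ --save_video
```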

Step 3: Biological motion perception, i.e. mapping human movements to textual descriptions. (The first sliced ballet pose is "second arabesque", and the second is "assemblé".)
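To make this step concrete, here is a hypothetical inference sketch in PyTorch; the model file, label list, and frame path are placeholders, not the repo's actual artifacts.

```python
# Classify one extracted frame with a trained movement classifier (illustrative).
import torch
from PIL import Image
from torchvision import transforms

labels = ["second arabesque", "assemblé"]  # illustrative subset of ballet classes

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),  # match the input size the model was trained on
    transforms.ToTensor(),
])

# Assumes the checkpoint stores the full pickled model (torch.save(model, ...)).
model = torch.load("ballet_classifier.pth", map_location="cpu")
model.eval()

image = preprocess(Image.open("Data/frames/frame_00000.jpg")).unsqueeze(0)
with torch.no_grad():
    pred = model(image).argmax(dim=1).item()
print(labels[pred])
```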

Step 4: Inserting the textual dance movement description into the frames and re-creating the video.
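A hypothetical OpenCV sketch of this last step: overlay each frame's label and re-assemble the annotated video. Paths, fps, and labels are placeholders.

```python
# Burn predicted labels into frames and write them back out as a video.
import glob
import cv2

frames = sorted(glob.glob("Data/frames/*.jpg"))
labels = ["second arabesque"] * len(frames)  # stand-in for the Step 3 predictions

h, w = cv2.imread(frames[0]).shape[:2]
out = cv2.VideoWriter("annotated.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 25, (w, h))

for path, label in zip(frames, labels):
    frame = cv2.imread(path)
    cv2.putText(frame, label, (30, 50), cv2.FONT_HERSHEY_SIMPLEX,
                1.2, (255, 255, 255), 2, cv2.LINE_AA)
    out.write(frame)
out.release()
```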

Dataset for the Deep-Learning Model

Our dataset consists of two parts: one for Ballet movement classification and one for Locking movement classification. After extracting the respective dataset, make sure the files are organized in the format specified here.
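For orientation, a class-per-folder layout like the one below is a common convention for image classifiers; this tree is illustrative only, and the format specification linked above is authoritative.

```
Data/
├── Ballet/
│   ├── train/
│   │   ├── arabesque/     # one folder per movement class
│   │   └── ...
│   └── validation/
│       └── ...
└── Locking/
    └── ...
```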

Although the number of images we could collect is limited due to time constraints and resources, we are constantly adding more, and new contributions to the dataset are always welcome.

Solution

  • Finding large amounts of data was, and still is, a great challenge for most problems in AI. As this also holds for Transpondancer, we collected our own dataset of different dance styles.
  • Since most of the images are taken directly from the internet, they need to be preprocessed before being passed to the model. This is done with the help of the data_handler script, which transforms the images into the specified shape and returns batches for both training and validation (see the sketch after this list).
  • Finally, we trained and produced deep-learning models that can identify dance poses of the selected genres and can also serve as a starting point for future models.
  • As some of the models are too big to be uploaded here, they are hosted on Google Drive and can be accessed via this link.
  • Here is what the workflow looks like:
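As referenced above, here is a minimal sketch of what a data_handler-style loader could look like, assuming a torchvision ImageFolder layout; the function name, folder names, and image size are assumptions, not the repo's actual API.

```python
# Build training and validation batches from a class-per-folder image dataset.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def get_loaders(data_dir, image_size=224, batch_size=32):
    tfm = transforms.Compose([
        transforms.Resize((image_size, image_size)),  # bring web images to one shape
        transforms.ToTensor(),
    ])
    train = datasets.ImageFolder(f"{data_dir}/train", transform=tfm)
    val = datasets.ImageFolder(f"{data_dir}/validation", transform=tfm)
    return (DataLoader(train, batch_size=batch_size, shuffle=True),
            DataLoader(val, batch_size=batch_size))

train_loader, val_loader = get_loaders("Data/Ballet")
```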

Installation

  1. Clone the repo:

git clone https://github.com/Yuni0217/Transpondancer.git

  2. Create a virtual environment with Python 3.7. (This step assumes you can create a virtual environment with virtualenv; in any case, you can check an example here.)

  3. Install the requirements using pip:

pip install -r requirements.txt

  4. Extract the datasets in the Data folder and run the following command. If you can see batches of images in a grid-like view, you are good to go.

python test.py

  5. To start training the model, run the following command. You can always tune the parameters in the train.py script.

python train.py
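For readers who want a feel for what a run involves before opening the script, here is a hedged sketch of the kind of loop train.py performs; the actual model, hyperparameters, and data paths live in the repo and may differ.

```python
# Fine-tune an ImageNet backbone on the dance-pose classes (illustrative setup).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("Data/Ballet/train", transform=tfm)  # path is an assumption
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(pretrained=True)  # backbone choice is illustrative
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):
    for images, targets in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```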

Future Work

  • Classification of movement in an image or a video using a time-series approach is planned, to reduce the error.

  • Sound classification will also be added: incorporating sound-design tools such as oscillators, filters, effects, and equalizers (e.g. high-pass, low-pass, notch) can help recreate the various sounds attributed to the dance styles.

References

  • PyTorch for training deep-learning models.
  • AlphaPose for real-time multi-person keypoint detection for body, face, hand, and foot estimation.


License

Distributed under the BSD 3-Clause "New" or "Revised" License.

