JingLi513 / Audio2Gestures

Official implementation of Audio2Motion: Generating Diverse Gestures from Speech with Conditional Variational Autoencoders.

About the location of data and pre-trained model

sunshinnnn opened this issue · comments

Hi there! I could not find the training and testing data or the pre-trained model to try the code out. Would you mind releasing them?

Help! Is "base_path" a folder by default? Where is this folder, and what should it contain?

Trinity College Dublin requires interested parties to sign a license agreement and receive approval before gaining access to that material, so we cannot host it here.
The dataset can be downloaded at https://trinityspeechgesture.scss.tcd.ie/

@JingLi513 Hi, can you please provide the script for data preparation? There is no code for creating the .h5 files with both audio and motion...
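In the meantime, here is a minimal sketch of how paired audio/motion clips could be packed into an .h5 file with h5py. The dataset keys (`audio`, `motion`), feature dimensions, and clip naming are assumptions for illustration, not the authors' actual format:

```python
# Hypothetical sketch: pack aligned audio/motion features into an HDF5 file.
# Keys, shapes, and the group layout are assumptions, not the repo's format.
import h5py
import numpy as np

def pack_clip(h5_path, clip_name, audio_feats, motion_feats):
    """Store one aligned audio/motion pair under a per-clip group."""
    with h5py.File(h5_path, "a") as f:
        grp = f.require_group(clip_name)
        grp.create_dataset("audio", data=audio_feats, compression="gzip")
        grp.create_dataset("motion", data=motion_feats, compression="gzip")

# Dummy placeholders: 100 frames of 80-dim audio features and 100 frames of
# 69-dim motion (e.g. 23 joints x 3 rotation channels) -- both made up here.
audio = np.random.randn(100, 80).astype(np.float32)
motion = np.random.randn(100, 69).astype(np.float32)
pack_clip("train.h5", "Recording_001", audio, motion)
```

In a real script the dummy arrays would be replaced by actual audio features (e.g. mel spectrograms) and BVH-derived joint rotations, resampled so the two streams are frame-aligned.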

@JingLi513 Can you please help? I don't know how to retarget the eyes and jaw, since those keypoints are not in the Trinity dataset. Could you provide some info on this?