Home page: https://sites.google.com/view/youngwoo-yoon/projects/co-speech-gesture-generation

Co-Speech Gesture Generator

This is an implementation of Robots Learn Social Skills: End-to-End Learning of Co-Speech Gesture Generation for Humanoid Robots (Paper, Project Page).

The original paper used the TED dataset, but in this repository we modified the code to use the Trinity Speech-Gesture Dataset for the GENEA Challenge 2020. The model was also changed to estimate rotation matrices for the upper-body joints instead of Cartesian coordinates.
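Because the network regresses 3×3 rotation matrices directly, the raw outputs are not guaranteed to be valid rotations. One common post-processing step (a hedged sketch, not necessarily what this repository does) is to project each predicted matrix onto the closest valid rotation via SVD:

```python
import numpy as np

def project_to_rotation(m: np.ndarray) -> np.ndarray:
    """Project an arbitrary 3x3 matrix onto the closest rotation matrix
    (orthogonal, determinant +1) using the SVD of the prediction."""
    u, _, vt = np.linalg.svd(m)
    r = u @ vt
    # If we landed on a reflection (det = -1), flip the last column of U.
    if np.linalg.det(r) < 0:
        u[:, -1] *= -1
        r = u @ vt
    return r

# Example: a noisy, near-rotation network prediction
noisy = np.eye(3) + 0.1 * np.random.default_rng(0).standard_normal((3, 3))
r = project_to_rotation(noisy)
```

The projected matrix can then be converted to joint angles for the skeleton or a robot without the drift that an unconstrained 3×3 output would accumulate.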

Environment

The code was developed with Python 3.6 on Ubuntu 18.04. PyTorch 1.3.1 was used, but later versions should also work.

How to run

  1. Install dependencies

    pip install -r requirements.txt
    
  2. Download the FastText vectors from here and put crawl-300d-2M-subword.bin into the resource folder (PROJECT_ROOT/resource/crawl-300d-2M-subword.bin). Alternatively, you may use the cache file instead of downloading the full FastText vectors (> 5 GB): put the cache file into the LMDB folder that will be created in the next step. The code loads the cache file automatically when it exists (see the build_vocab function).

  3. Make LMDB

    cd scripts
    python trinity_data_to_lmdb.py [PATH_TO_TRINITY_DATASET]
    
  4. Update paths and parameters in PROJECT_ROOT/config/seq2seq.yml and run train.py

    python train.py --config=../config/seq2seq.yml
    
  5. Inference

    python inference.py [PATH_TO_MODEL] [PATH_TO_TRANSCRIPT]
    

     We share a model trained on the training set of the GENEA Challenge 2020. Click here to download it.
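Step 2's cache behaviour (load a small cached vocabulary if present, otherwise fall back to the full 5 GB FastText vectors and cache the result) can be sketched roughly as follows. The function and file names here are illustrative, not the repository's actual build_vocab code:

```python
import os
import pickle
import tempfile

def load_word_vectors(cache_path, build_from_fasttext):
    """Return word vectors from a pickle cache if it exists; otherwise
    build them with the caller-supplied fallback (e.g. a function that
    reads the full FastText .bin) and cache the result for next time."""
    if os.path.exists(cache_path):
        with open(cache_path, "rb") as f:
            return pickle.load(f)
    vectors = build_from_fasttext()
    with open(cache_path, "wb") as f:
        pickle.dump(vectors, f)
    return vectors

# Example with a tiny stand-in for the real FastText loader
cache = os.path.join(tempfile.mkdtemp(), "vocab_cache.p")
first = load_word_vectors(cache, lambda: {"hello": [0.1, 0.2]})
second = load_word_vectors(cache, lambda: {})  # served from the cache
```

This is why placing a prebuilt cache file in the LMDB folder lets you skip the large FastText download entirely.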

License

Please see LICENSE.md

Citation

@INPROCEEDINGS{yoonICRA19,
  title={Robots Learn Social Skills: End-to-End Learning of Co-Speech Gesture Generation for Humanoid Robots},
  author={Yoon, Youngwoo and Ko, Woo-Ri and Jang, Minsu and Lee, Jaeyeon and Kim, Jaehong and Lee, Geehyuk},
  booktitle={Proc. of the International Conference on Robotics and Automation (ICRA)},
  year={2019}
}
