
SpeechCLIP



Links: arXiv | Blog

Code Contributors

Yi-Jen Shih, Hsuan-Fu Wang, Heng-Jui Chang

Prerequisites

Install the required packages:

pip install -r requirements.txt

Data Preparation

See Details

Download Pretrained Checkpoints

bash download_ckpts.sh

You should see "Done downloading all checkpoints" after the script finishes.

Note that training requires 2 GPUs for base models and 4 GPUs for large models.

Usage

Remember to check that dataset_root in your experiment config points to the data prepared above; a quick sanity check is sketched below.
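
A minimal sanity-check sketch, assuming the experiment configs are YAML files; the config path and key layout below are hypothetical, so adjust them to whatever config your egs/ script actually loads:

# Hypothetical pre-flight check that dataset_root points at prepared data.
# The config path and the "data.dataset_root" key are assumptions.
import os
import yaml  # pip install pyyaml

with open("config/model_base/parallel/train.yaml") as f:  # assumed path
    cfg = yaml.safe_load(f)

dataset_root = cfg["data"]["dataset_root"]  # assumed key layout
assert os.path.isdir(dataset_root), f"dataset_root not found: {dataset_root}"
print(f"dataset_root OK: {dataset_root}")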

Train

Example: train Parallel SpeechCLIP base:

bash egs/model_base/parallel/train.sh

Inference

Example: test Parallel SpeechCLIP base (using a pretrained checkpoint):

bash egs/model_base/parallel/test.sh

For more settings, please see the folders in ./egs/.

Getting embeddings from SpeechCLIP

See example.py
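
A minimal sketch of pulling embeddings out of a pretrained model; the module path, class name, checkpoint filename, and encoder method below are assumptions, so treat example.py as the authoritative reference:

import torch
from avssl.model import KWClip_GeneralTransformer  # assumed module/class name

# Checkpoint filename is an assumption; use one fetched by download_ckpts.sh.
CKPT = "checkpoints/parallel_base.ckpt"

model = KWClip_GeneralTransformer.load_from_checkpoint(CKPT)
model.eval()

# One second of dummy 16 kHz audio standing in for a real waveform.
wav = torch.randn(16000)

with torch.no_grad():
    # encode_speech is an assumed entry point; see example.py for the actual
    # method name and the structure of its output.
    emb = model.encode_speech(wav=[wav])

print(emb)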

Citation

@article{speechclip2022,
  title={SpeechCLIP: Integrating Speech with Pre-Trained Vision and Language Model},
  author={Yi-Jen Shih and Hsuan-Fu Wang and Heng-Jui Chang and Layne Berry and Hung-yi Lee and David Harwath},
  journal={IEEE SLT},
  year={2022},
  publisher={IEEE}
}

Contribute

Please run the autoformatter (see ./dev-support/) before opening a PR!

About

SpeechCLIP: Integrating Speech with Pre-Trained Vision and Language Model, Accepted to IEEE SLT 2022

https://atosystem.github.io/blogs/speechclip

License: BSD 3-Clause "New" or "Revised" License

