- Install PyTorch: https://pytorch.org/get-started
- Install the Python requirements: `pip install -r requirements.txt`
- Follow the instructions here to install NVIDIA Apex; note that only the Python-only build is required, i.e. `pip install -v --disable-pip-version-check --no-cache-dir ./`
- Run `preprocess.sh` to extract spectrogram features; they are saved in `datasets/feats`
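The layout of the extracted features is not documented here; a minimal sketch of inspecting one, under the assumption that the preprocessing step writes one `.npy` spectrogram per utterance shaped `(num_frames, num_mel_bins)` (the filename and shape below are hypothetical, and the file is simulated so the sketch runs standalone):

```python
import numpy as np

# Hypothetical: simulate one extracted feature file (120 frames, 80 mel bins)
# standing in for an array under datasets/feats.
dummy = np.random.rand(120, 80).astype(np.float32)
np.save("example_feat.npy", dummy)

# Load it back the way a training script would.
feats = np.load("example_feat.npy")
print(feats.shape)  # (120, 80)
```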
- Run `python vq_vae/train.py`; model checkpoints are saved in `exp/zerospeech_vae`
- Run `python vq_vae/encode.py ++checkpoint=<checkpoint_path>`; outputs are saved in `exp/zerospeech_vae`
- (Optional) Run `python vq_vae/visualize_encodings.py` to plot the VQ-VAE encoding outputs; figures are saved in `exp/zerospeech_vae/encode`
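The bitrate figure reported by the evaluation below is, in the standard ZeroSpeech definition, the empirical entropy of the discrete code stream scaled to bits per second. A minimal sketch of that computation (the function name and the equiprobable toy code stream are illustrative, not part of this repo):

```python
import math
from collections import Counter

def estimate_bitrate(symbols, duration_s):
    """Entropy-based bitrate: bits per symbol times symbols per second."""
    counts = Counter(symbols)
    n = len(symbols)
    # Shannon entropy (bits) of the empirical symbol distribution.
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return entropy * n / duration_s

# 100 frames over 1 s drawn uniformly from 4 codes: 2 bits/frame * 100 frames/s
print(estimate_bitrate([0, 1, 2, 3] * 25, 1.0))  # 200.0
```

A smaller codebook or a more peaked code distribution lowers the bitrate, typically at the cost of a worse ABX error, which is the trade-off the two scores below capture.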
- Prepare the Python environment as described here
- Run `preprocess_and_encode_zerospeech.sh`; results are saved in `zerospeech_data`
- Run the official evaluation:

  ```sh
  ZEROSPEECH2020_DATASET=/mnt/e/datasets/zerospeech2020/2020 \
  zerospeech2020-evaluate 2019 -j10 zerospeech2020_datasets/submission
  ```
```json
{
  "2019": {
    "english": {
      "scores": {
        "abx": 39.86074224642665,
        "bitrate": 387.90274352928196
      },
      "details_bitrate": {
        "test": 387.90274352928196
      },
      "details_abx": {
        "test": {
          "cosine": 39.86074224642665,
          "KL": 50.0,
          "levenshtein": 42.31439953566402
        }
      }
    }
  }
}
```

```json
{
  "2019": {
    "english": {
      "scores": {
        "abx": 40.45447005042622,
        "bitrate": 404.3762622526002
      },
      "details_bitrate": {
        "test": 404.3762622526002
      },
      "details_abx": {
        "test": {
          "cosine": 40.45447005042622,
          "KL": 50.0,
          "levenshtein": 42.71491717354263
        }
      }
    }
  }
}
```
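The evaluator emits a nested JSON report like the ones above; pulling out the headline numbers is a one-liner once it is parsed. A small example using the first result block (the literal below is copied from that output):

```python
import json

# First evaluation report from above, trimmed to the headline scores.
report = json.loads("""
{
  "2019": {
    "english": {
      "scores": {"abx": 39.86074224642665, "bitrate": 387.90274352928196}
    }
  }
}
""")

scores = report["2019"]["english"]["scores"]
print(f"ABX error: {scores['abx']:.2f}  bitrate: {scores['bitrate']:.1f} bit/s")
```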