pxaris / lyra-dataset

Lyra - A Dataset for Greek Traditional and Folk Music


The Lyra Dataset

Lyra is a dataset for Greek Traditional and Folk music that includes 1570 pieces, summing to around 80 hours of audio. The dataset provides timestamped YouTube links for retrieving audio and video, along with rich metadata regarding instrumentation, geography, and genre, among other attributes.

Mel-spectrograms

The mel-spectrograms of the 1570 pieces, generated with the following parameters:

Audio sampling rate (sr): 16000
Length of the FFT window (n_fft): 512
Number of samples between successive frames (hop_length): 256
Number of mel filterbanks (n_mels): 128
Minimum frequency (f_min): 0.0
Maximum frequency (f_max): 8000

can be downloaded at: mel-spectrograms.zip (7.8 GB)


The mel-spectrograms used in the dataset's introduction paper, generated with the following parameters:

Audio sampling rate (sr): 8000
Length of the FFT window (n_fft): 400
Number of samples between successive frames (hop_length): 400
Number of mel filterbanks (n_mels): 128

can be downloaded at: mel-spectrograms_initial.zip (2.1 GB)

Structure

Data files in the data/ directory

  • raw.tsv - raw file with all metadata

  • split/ - training and test set split

    • training.tsv - raw file with all metadata of the training set samples
    • test.tsv - raw file with all metadata of the test set samples
  • metadata-information/ - information about metadata

    • genres_hierarchy.json - hierarchical relationships between all genres
    • places_coordinates.json - coordinates of each place
    • places_hierarchy.json - hierarchical relationships of each place
    • vocabulary.json - vocabulary with definitions of the terms appearing in the dataset
  • mel-spectrograms/ - the mel-spectrograms of all music pieces following the naming convention {id}.npy

Using the trained models for inference

Requirements

  • FFmpeg
  • Python 3.8 or later
  • Create a virtual environment and install the requirements:

```shell
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

Get inference results

  1. Download trained models from here and put them under models/ directory.
  2. Place an input.wav file under inference/, or use a different name and adjust INPUT_FILE in run_inference.py accordingly.
  3. Run: python inference/run_inference.py
  4. The inference results will be printed in the terminal.

Citing the dataset

Please consider citing the following publication when using the dataset:

C. Papaioannou, I. Valiantzas, T. Giannakopoulos, M. Kaliakatsos-Papakostas and A. Potamianos, "A Dataset for Greek Traditional and Folk Music: Lyra", in Proc. of the 23rd Int. Society for Music Information Retrieval Conf., Bengaluru, India, 2022.

License
