jason9693 / MusicTransformer-pytorch

Implementation of Music Transformer in PyTorch (ICLR 2019)

Music Transformer: Generating Music with Long-Term Structure

Abstract

  1. This repository is fully compatible with PyTorch.

Contribution

  • Domain: Dramatically reduces the memory footprint of attention, allowing the model to scale to musical sequences on the order of minutes.
  • Algorithm: Reduces the space complexity of relative attention in the Transformer from O(N^2 D) to O(ND) (a sketch of the trick follows this list).
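
The savings come from the "skewing" trick in the paper: rather than materializing an O(N^2 D) tensor of per-pair relative embeddings, the model multiplies Q by a single (N, D) relative-embedding matrix and realigns the result with a pad-reshape-slice. A minimal PyTorch sketch, with illustrative names rather than this repository's actual module:

import torch
import torch.nn.functional as F

def skew(qe):
    # qe: (B, H, L, L) = Q @ E^T, relative logits before alignment.
    # Pad one dummy column, reshape, and drop the first row so that
    # entry (i, j) lines up with relative distance j - i.
    B, H, L, _ = qe.shape
    qe = F.pad(qe, (1, 0))           # (B, H, L, L + 1)
    qe = qe.reshape(B, H, L + 1, L)  # the pad shifts each row by one
    return qe[:, :, 1:, :]           # (B, H, L, L), aligned

def relative_attention(q, k, v, rel_emb):
    # q, k, v: (B, H, L, d); rel_emb: (L, d) learned relative embeddings,
    # so positional memory is O(L * d) instead of O(L^2 * d).
    d = q.size(-1)
    logits = q @ k.transpose(-2, -1) + skew(q @ rel_emb.T)
    weights = torch.softmax(logits / d ** 0.5, dim=-1)
    return weights @ v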

Preprocessing

  • This repository uses the single-track method (the 2nd method in the paper).

  • If you want an implementation of method 1, see here.

  • The preprocessing code was adapted from the PerformanceRNN re-built repository.

  • The preprocessing implementation repository is here; a sketch of the event encoding follows this list.
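
For orientation, the single-track encoding represents a performance as one flat stream of note-on, note-off, time-shift, and velocity events (the 388-token vocabulary of Oore et al.). The sketch below is illustrative only; the token offsets and helper name are assumptions, not the midi_processor API:

# Illustrative layout of the 388-token performance vocabulary.
NOTE_ON_OFFSET = 0       # 128 note-on events, one per MIDI pitch
NOTE_OFF_OFFSET = 128    # 128 note-off events
TIME_SHIFT_OFFSET = 256  # 100 time-shift steps (10 ms .. 1 s)
VELOCITY_OFFSET = 356    # 32 quantized velocity bins

def encode_note(pitch, velocity, duration_ms):
    # Hypothetical helper: encode one note as a token sequence.
    tokens = [VELOCITY_OFFSET + velocity * 32 // 128,  # velocity -> 32 bins
              NOTE_ON_OFFSET + pitch]
    steps = max(1, duration_ms // 10)   # duration in 10 ms steps
    while steps > 0:
        shift = min(steps, 100)         # each token covers at most 1 s
        tokens.append(TIME_SHIFT_OFFSET + shift - 1)
        steps -= shift
    tokens.append(NOTE_OFF_OFFSET + pitch)
    return tokens

# e.g. middle C, velocity 64, held 250 ms:
# encode_note(60, 64, 250) -> [372, 60, 280, 188]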

Simple Start (Repository Setup)

$ git clone https://github.com/jason9693/MusicTransformer-pytorch.git
$ cd MusicTransformer-pytorch
$ git clone https://github.com/jason9693/midi-neural-processor.git
$ mv midi-neural-processor midi_processor

Midi Download

$ sh dataset/script/{ecomp_piano_downloader, midi_world_downloader, ...}.sh

Prepare Dataset

$ python preprocess.py {midi_load_dir} {dataset_save_dir}
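
Conceptually, this step encodes every MIDI file into an event-id sequence and serializes it for training. A hedged sketch of that flow; encode_midi mirrors the midi_processor repository, and the actual preprocess.py may differ in layout and file naming:

import os
import pickle
from midi_processor.processor import encode_midi  # assumed import path

def preprocess(midi_load_dir, dataset_save_dir):
    # Encode each .mid/.midi file to a list of event ids and pickle it.
    os.makedirs(dataset_save_dir, exist_ok=True)
    for name in os.listdir(midi_load_dir):
        if not name.lower().endswith(('.mid', '.midi')):
            continue
        events = encode_midi(os.path.join(midi_load_dir, name))
        with open(os.path.join(dataset_save_dir, name + '.pickle'), 'wb') as f:
            pickle.dump(events, f)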

Training

$ python train.py -c {config yml file 1} {config yml file 2} ... -m {model_dir}
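
The -c flag takes several YAML files that are applied in order, so later files can override earlier keys. A minimal sketch of that pattern, assuming PyYAML; the repository's actual config loader may differ:

import yaml  # PyYAML, assumed

def load_configs(*paths):
    # Merge config files left to right; later keys win (shallow merge).
    merged = {}
    for path in paths:
        with open(path) as f:
            merged.update(yaml.safe_load(f) or {})
    return merged

# cfg = load_configs('config/base.yml', 'config/train.yml')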

Hyperparameters

  • learning rate : 0.0001
  • head size : 4
  • number of layers : 6
  • sequence length : 2048
  • embedding dim : 256 (dh = 256 / 4 = 64)
  • batch size : 2
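
The same settings expressed as a config YAML of the kind passed via -c (the key names are illustrative assumptions, not the repository's exact schema):

l_r: 0.0001        # learning rate
num_head: 4        # attention heads
num_layers: 6
max_seq: 2048      # sequence length
embedding_dim: 256 # per-head dim = 256 / 4 = 64
batch_size: 2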

Result

  • Baseline Transformer (green, gray) vs. Music Transformer (blue, red)
  • Loss

    (loss curve plot)

  • Accuracy

    (accuracy curve plot)

Generate Music

$ python generate.py -c {config yml file 1} {config yml file 2} -m {model_dir}
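
Under the hood, generation is autoregressive: sample the next event id from the model's output distribution, append it, and repeat, then decode the ids back into a MIDI file. A minimal sketch; the model call signature is illustrative, and decode_midi mirrors the midi_processor repository:

import torch
from midi_processor.processor import decode_midi  # assumed import path

@torch.no_grad()
def generate(model, primer, length=2048):
    # primer: list of event ids used to seed the model (assumed).
    model.eval()
    seq = torch.tensor(primer).unsqueeze(0)      # (1, T)
    while seq.size(1) < length:
        logits = model(seq)[:, -1, :]            # next-event logits (assumed shape)
        probs = torch.softmax(logits, dim=-1)
        next_id = torch.multinomial(probs, 1)    # sample one event id
        seq = torch.cat([seq, next_id], dim=1)
    return seq.squeeze(0).tolist()

# events = generate(model, primer, length=1024)
# decode_midi(events, file_path='generated.mid')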

Generated Samples (YouTube Link)

  • Click the image.

License: MIT License


Languages

Python 100.0%