This is an LSTM-based recurrent neural network trained on Haydn's Piano Sonata No. 48, III. Finale.
The training file can be found at kunstderfuge.com.
This is more of an exploration than a serious project.
- Download and install Anaconda.
- Create and activate the environment:

  ```shell
  conda update -n base conda
  conda create --name ml numpy scipy h5py jupyter keras
  source activate ml
  pip install --upgrade pip
  pip install music21 matplotlib
  ```

- Install TensorFlow, for example:

  ```shell
  pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.6.0-py3-none-any.whl
  ```

- Launch Jupyter Notebook:

  ```shell
  jupyter notebook
  ```

- Run the notebook `Jazzy Haydn.ipynb`.
- Get the result from `output/my_music.midi`.
Most code is adapted from Ji-Sung Kim's work, which is based on Evan Chow's jazzml. However, some parts were tailored specifically to his training data, so we made the following changes:
- `data_utils.py`: moved the function `generate_music` to the Jupyter notebook; use the variable `N_tones` for `num_classes` everywhere.
- `grammar.py`: return `None` if either of the lists `measure` or `chords` has length 0.
- `inference_code.py`: use the same `N_tones` as the one in `data_utils.py`.
- `music_utils.py`: moved the `one_hot` helper function to the Jupyter notebook.
- `preprocess.py`: multiple changes to the `__parse_midi` function to make it compatible with our training data.
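For reference, the `one_hot` helper mentioned above can be sketched as follows. This is an assumption-laden illustration, not the project's exact code: it takes integer tone-class indices and a vocabulary size (here the hypothetical `N_tones = 78`) and returns one-hot row vectors.

```python
# Illustrative sketch of a one_hot helper; N_tones is an assumed
# vocabulary size, not necessarily the value used in the notebook.
import numpy as np

N_tones = 78  # assumed number of tone classes

def one_hot(indices, num_classes=N_tones):
    """Convert integer class indices into one-hot row vectors."""
    indices = np.asarray(indices)
    out = np.zeros((indices.size, num_classes))
    out[np.arange(indices.size), indices.ravel()] = 1.0
    return out
```

Using a single shared `N_tones` here is exactly why the changes above thread the same variable through `data_utils.py` and `inference_code.py`: a mismatch between the encoder's and decoder's class counts would silently corrupt the vectors.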
We adopted the model from Coursera. It is a one-to-many LSTM RNN with 64 memory cells, which feeds the output of the previous time step back in as the input to the current one.
We train this model for 100 epochs, then generate 50 new values from the network.
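The one-to-many generation loop described above can be sketched in Keras as follows. This is a hedged sketch, not the notebook's exact code: `N_tones` (vocabulary size) and the layer names are assumptions; the 64 memory cells and 50 generated values match the description.

```python
# Sketch of a one-to-many LSTM inference model in Keras.
# One shared LSTM cell is applied at every step, and each step's
# softmax output is fed back in as the next step's input.
import numpy as np
from tensorflow.keras.layers import Dense, Input, LSTM, Reshape
from tensorflow.keras.models import Model

N_tones = 78  # assumed vocabulary size (one-hot tone classes)
n_a = 64      # 64 memory cells, as described above
Ty = 50       # number of new values to generate

# Shared layers, reused at every time step.
lstm_cell = LSTM(n_a, return_state=True)
densor = Dense(N_tones, activation="softmax")
reshapor = Reshape((1, N_tones))

x0 = Input(shape=(1, N_tones))  # seed input
a0 = Input(shape=(n_a,))        # initial hidden state
c0 = Input(shape=(n_a,))        # initial cell state

x, a, c = x0, a0, c0
outputs = []
for _ in range(Ty):
    a, _, c = lstm_cell(x, initial_state=[a, c])
    out = densor(a)
    outputs.append(out)
    # Feed the previous output back in as the current input.
    x = reshapor(out)

inference_model = Model(inputs=[x0, a0, c0], outputs=outputs)
```

In the real notebook the argmax (or a sample) of each softmax would typically be re-one-hot-encoded before being fed back; the plain feedback above keeps the sketch short.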