wav2mid: Polyphonic Piano Music Transcription with Deep Neural Networks

Thesis by Jonathan Sleep for MS in CSC @ CalPoly

Abstract / Intro

There has been a wealth of recent research on using deep learning for music and audio generation and classification. In this thesis, we build on that work by implementing a novel system to automatically transcribe polyphonic piano music with an artificial neural network model. We show that by treating transcription as an image classification problem, we can use transformed audio data to predict the set of notes sounding in each frame.

Background

  • Digital signal processing: Fourier transform, STFT, constant-Q transform, onset/beat tracking, autocorrelation
  • Machine learning: artificial neural networks, convolutional neural networks, recurrent neural networks
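
As a quick, concrete illustration of the DSP concepts, the sketch below computes an STFT, a constant-Q transform aligned to the 88 piano keys, and onset/beat estimates with librosa. The file name and parameter values are placeholders for illustration, not code from this repo.

```python
import librosa
import numpy as np

# 'example.wav' is a placeholder file name.
y, sr = librosa.load('example.wav', sr=22050)

# Short-time Fourier transform: linear-frequency bins.
stft = np.abs(librosa.stft(y, n_fft=2048, hop_length=512))

# Constant-Q transform: log-spaced bins, one per semitone here,
# which lines up naturally with the 88 piano pitches (A0 and up).
cqt = np.abs(librosa.cqt(y, sr=sr, hop_length=512,
                         fmin=librosa.note_to_hz('A0'),
                         n_bins=88, bins_per_octave=12))

# Onset detection and beat tracking.
onsets = librosa.onset.onset_detect(y=y, sr=sr, hop_length=512)
tempo, beats = librosa.beat.beat_track(y=y, sr=sr, hop_length=512)
```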

Related Work on AMT

Design

The design for the system is as follows:

  • Pre-process the data into an ingestible format: a Fourier-like transform of the audio and a piano-roll conversion of the MIDI files (see the sketch after this list)
  • Design a neural network model to estimate the notes currently sounding from the audio data
  • Predict notes either frame-wise (simpler) or only at onsets (faster)
  • Train on a large corpus of paired audio and MIDI
  • Evaluate its performance on audio/MIDI pairs held out of training
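
Here is a minimal sketch of the pre-processing step, assuming we pair a constant-Q spectrogram of the audio with a binarized piano roll sampled at the same frame rate. The file names, sample rate, and hop size are illustrative choices, not taken from the thesis:

```python
import librosa
import numpy as np
import pretty_midi

SR, HOP = 22050, 512
FPS = SR / HOP  # CQT frames per second (~43)

# 'song.wav' / 'song.mid' are placeholder names for one training pair.
y, _ = librosa.load('song.wav', sr=SR)
cqt = np.abs(librosa.cqt(y, sr=SR, hop_length=HOP,
                         fmin=librosa.note_to_hz('A0'), n_bins=88))

midi = pretty_midi.PrettyMIDI('song.mid')
# Sample the piano roll at the CQT frame rate, keep the 88 piano
# pitches (MIDI notes 21-108), and binarize velocities to on/off.
roll = (midi.get_piano_roll(fs=FPS)[21:109] > 0).astype(np.float32)

# Trim to a common length so every spectrogram frame has a label vector.
n = min(cqt.shape[1], roll.shape[1])
X = cqt[:, :n].T   # (frames, 88) spectrogram features
Y = roll[:, :n].T  # (frames, 88) binary note labels
```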

Implementation

Libraries

  • Python - due to the abundance of music and machine learning libraries developed for it
  • librosa - for digital signal processing methods
  • pretty_midi - for midi manipulation methods
  • TensorFlow - for neural networks
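
To make the model step concrete, below is a minimal frame-wise sketch in TensorFlow/Keras, not the thesis architecture itself: a small CNN over CQT patches ending in one sigmoid unit per piano key, so several simultaneous notes can be predicted at once. The layer sizes and context width are illustrative assumptions.

```python
import tensorflow as tf

N_BINS, CONTEXT, N_KEYS = 88, 5, 88  # CQT bins, context frames, piano keys

model = tf.keras.Sequential([
    # Input: a small image-like patch of the spectrogram around one frame.
    tf.keras.layers.Conv2D(32, (3, 3), padding='same', activation='relu',
                           input_shape=(N_BINS, CONTEXT, 1)),
    tf.keras.layers.MaxPooling2D(pool_size=(2, 1)),
    tf.keras.layers.Conv2D(64, (3, 3), padding='same', activation='relu'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    # Sigmoid (not softmax): each key is an independent on/off decision.
    tf.keras.layers.Dense(N_KEYS, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
```

Using sigmoid outputs with binary cross-entropy (rather than a softmax) is what makes the task multi-label: each key is an independent on/off decision, so chords simply appear as multiple active outputs.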

Data


License

MIT License

