gilesbowkett / RealtimeAudioClassification

Using spectrograms and convolutional neural networks to listen to environment sounds.

Home Page: https://github.com/FAR-Lab/RealtimeAudioClassification/wiki


Realtime Audio Classification for Musicians

(An homage to TensorFlow for Poets.)

Overview

In this workshop, we will teach you how to design audio classifiers using neural nets. We will guide you through the steps of collecting and organizing data, generating spectrograms, training a network, and then using that network to detect audio in real time. We will use Jupyter notebooks, Python3, PyTorch, and Librosa to play with neural nets that can distinguish different music and different audio sources.
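The spectrogram-generation step can be sketched with plain NumPy, as a minimal illustration of what the workshop notebooks do with Librosa (e.g. its mel-spectrogram utilities). The window and hop sizes below are illustrative, not the workshop's actual settings:

```python
import numpy as np

def spectrogram(signal, n_fft=512, hop=256):
    """Magnitude spectrogram via a Hann-windowed short-time FFT."""
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    # Rows are frequency bins, columns are time frames -- an image-like
    # 2D array that a convolutional neural network can be trained on.
    return np.abs(np.fft.rfft(frames, axis=1)).T

sr = 22050
t = np.arange(sr) / sr              # one second of audio
tone = np.sin(2 * np.pi * 440 * t)  # synthetic 440 Hz tone instead of a file
S = spectrogram(tone)
print(S.shape)                      # (n_fft // 2 + 1 bins, time frames)
```

Treating these 2D arrays as images is what lets standard image-classification CNNs (as in the Cats & Dogs lab) carry over to sound.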

Provisional Workshop Schedule

| Time | Monday | Tuesday | Wednesday | Thursday | Friday |
| --- | --- | --- | --- | --- | --- |
| 9am | Introductions | Review/Q&A | Review/Q&A | Review/Q&A | Review/Q&A |
| 10-noon | Neural Nets | Collecting & Analyzing Sounds | Designing Interaction | Applications of AI for Sound | Project Time |
| noon-1:30 | Lunch | Lunch | Lunch | Lunch | Lunch |
| 1:30-3:30 | Lab Setup | Home Sounds Dataset Activity | Wizard Lab | Final Project | Project Time / Show and Tell |
| 3:30-5pm | Cats & Dogs Lab | Home Sounds Dataset Activity | Plotting | Final Project | Final Project Happy Hour |

See the workshop Wiki for Lab and Lectures.


