acen20 / cnn-tf-keras-audio-classification

Feature extraction from sound signals, along with a complete CNN model and evaluation, using TensorFlow, Keras, and librosa for MFCC generation


CNN for sound classification

Note: Uncomment the MFCC extraction block to work with your own sounds. Otherwise, a sample dataset is provided as dataset.npy.
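The repository's notebook defines its own layer stack; as a rough sketch of what a CNN over 44x13 MFCC inputs can look like, here is a minimal Keras model. The layer sizes and the class count (`NUM_CLASSES`) are assumptions for illustration, not the repo's exact architecture.

```python
import numpy as np
from tensorflow import keras

NUM_CLASSES = 3  # assumption: set this to the number of sound folders you have

# Minimal sketch of a CNN over (44, 13, 1) MFCC "images" -- not the repo's
# exact architecture, just a plausible shape-compatible example.
model = keras.Sequential([
    keras.layers.Input(shape=(44, 13, 1)),  # 44 frames x 13 MFCCs + channel axis
    keras.layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

With integer labels of shape (178,), `sparse_categorical_crossentropy` avoids one-hot encoding the targets.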

About dataset.npy

  1. Cepstral coefficients of dimension (178, 44, 13), where 178 is the number of audio clips, 44 the number of frames per clip, and 13 the number of coefficients per frame.
  2. Labels of dimension (178,)
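The exact serialization of dataset.npy is not documented here; one common way to bundle a features array and a labels array into a single .npy file is a pickled object array, sketched below with synthetic data mirroring the documented shapes. The file name and layout are assumptions.

```python
import numpy as np

# Synthetic stand-in mirroring the documented shapes; the real dataset.npy
# may be laid out differently (e.g. a pickled (features, labels) tuple).
mfccs = np.random.randn(178, 44, 13).astype(np.float32)
labels = np.random.randint(0, 3, size=178)

# Bundle both arrays into one object array and save it (hypothetical layout).
bundle = np.empty(2, dtype=object)
bundle[0], bundle[1] = mfccs, labels
np.save("dataset_demo.npy", bundle)

# Loading a pickled object array requires allow_pickle=True.
X, y = np.load("dataset_demo.npy", allow_pickle=True)
```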

Using Librosa for Feature Extraction

Generates a 44x13 2D feature matrix ("image") for each sound signal, plus a target column (label)

REVIEW THE GRAPHS BELOW

Quick guide

  1. For each type of sound, create a directory (folder) inside the audio/ directory.
  2. To see what I mean by that, explore the audio folder in this repository; I have placed an audio file there as an example.
  3. After you are done making directories for your sounds, place this script in the same directory, as it is placed in this repository.
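The folder-per-class layout above maps naturally to (file path, label) pairs, with the folder name serving as the label. The helper below is a hypothetical sketch of that scan, not code from the repository; it builds a throwaway demo layout so it runs standalone.

```python
import os
import tempfile

def collect_audio_files(root):
    """Hypothetical helper: walk root/<class>/*.wav and return
    (path, label) pairs, using each folder name as the class label."""
    pairs = []
    for class_name in sorted(os.listdir(root)):
        class_dir = os.path.join(root, class_name)
        if not os.path.isdir(class_dir):
            continue
        for fname in sorted(os.listdir(class_dir)):
            if fname.lower().endswith(".wav"):
                pairs.append((os.path.join(class_dir, fname), class_name))
    return pairs

# Demo layout mirroring the quick guide: audio/<class>/<file>.wav
root = tempfile.mkdtemp()
for cls in ("cat", "dog"):
    os.makedirs(os.path.join(root, cls))
    open(os.path.join(root, cls, f"{cls}_01.wav"), "w").close()

pairs = collect_audio_files(root)
```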

The following is the sequence of transformations the signal goes through until the MFCCs are generated:

Waveform


Fourier Transform


Power Spectrum


Spectrogram


Log Spectrogram


MFCCs (Mel Frequency Cepstral Coefficients)


Key points

  1. This implementation rejects audio signals with a sample rate lower than 22050 Hz.
  2. The number of MFCCs extracted is 13.
  3. The hop length across the signal is 512 samples.
  4. The FFT window size is 2048 samples.
