Repositories of the Audio, Music, and AI Lab at SUTD
Video2Music
Video2Music: Suitable Music Generation from Videos using an Affective Multimodal Transformer model
awesome-MER
A curated list of Datasets, Models and Papers for Music Emotion Recognition (MER)
DisfluencySpeech
Resources for DisfluencySpeech
ai-audio-datasets-list
A list of datasets of speech, music, and sound effects that provide training data for generative AI (AIGC), AI model training, intelligent audio tool development, and audio applications. It mainly covers speech recognition, speech synthesis, singing voice synthesis, music information retrieval, music generation, etc.
CVAE-Tacotron
Conditional VAE for Accented Speech Generation
genmusic_demo_list
a list of demo websites for automatic music generation research
singapore-music-classifier
Code for the paper "A Dataset and Classification Model for Malay, Hindi, Tamil and Chinese Music"
DiffRoll
PyTorch implementation of DiffRoll, a diffusion-based generative automatic music transcription (AMT) model
nnAudio
Audio processing using PyTorch 1D convolution networks
AudioLoader
PyTorch Dataset for Speech and Music audio
Conditional-Drums-Generation-using-Compound-Word-Representations
Conditional Drums Generation using Compound Word Representations
datasets_emotion
This repository collects information about different datasets for Music Emotion Recognition.
demucs_lightning
Demucs Lightning: a PyTorch Lightning version of Demucs with Hydra and TensorBoard features
emotionweb
Website for emotion guidance
FundamentalMusicEmbedding
Fundamental Music Embedding (FME)
IJCNN2020_music_emotion
Regression-based Music Emotion Prediction using Triplet Neural Networks
Jointist
Official Implementation of Jointist
kylo-ren-app
Web interface for AI music generation models
LeadSheetGen_Valence
A novel seq2seq framework in which high-level musical qualities (such as the valence of the chord progression) are fed to the encoder and "translated" into lead sheet events by the decoder. For further details, please read and cite our paper:
MusIAC
Music inpainting control
ReconVAT
ReconVAT: a semi-supervised automatic music transcription (AMT) model