Matteo Fabbri's repositories
Audio-Signal-Processing-for-Music-Applications
Materials and assignments from the 'Audio Signal Processing for Music Applications' course taught by Xavier Serra in the Sound and Music Computing Master's programme of the Music Technology Group, UPF, Barcelona.
Large-scale-Music-data-Audio-content-based-playlists
A system that generates music playlists from the results of audio content analysis. The MusAV dataset serves as the music audio collection, music descriptors are extracted with Essentia, and a simple user interface generates playlists based on those descriptors.
Neural-Texture-Sound-Synthesis-with-physically-driven-continuous-controls
Neural texture sound synthesis exposing physically driven continuous controls, using synthetic-to-real unsupervised domain adaptation.
Neural-Texture-Sound-synthesis---data-sets
Synthetic and real waterflow sound datasets for the 'Neural-Texture-Sound-Synthesis-with-physically-driven-continuous-controls' repository.
High-level-timbral-features-extractor
A convolution-based supervised regression model for extracting high-level timbral features from drum sound files, useful for conditioning a real-time neural sound synthesiser on continuous, intuitive controls.
Large-scale-Music-data-Collaborative-Filtering-with-ListenBrainz
A model that identifies similar musical artists from user listening histories in ListenBrainz.
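A common way to find similar artists from listening histories is item-item collaborative filtering: treat each artist as a vector of per-user play counts and rank neighbours by cosine similarity. The sketch below uses made-up toy data and is only an illustration of the general technique, not the repository's actual model.

```python
import numpy as np

# Hypothetical toy play counts: rows = users, columns = artists.
artists = ["Radiohead", "Portishead", "Metallica", "Slayer"]
plays = np.array([
    [10,  8,  0,  0],
    [ 7,  9,  1,  0],
    [ 0,  0, 12,  9],
    [ 1,  0,  8, 11],
], dtype=float)

def similar_artists(matrix, names, query, k=2):
    """Rank artists by cosine similarity of their user-play columns."""
    # normalise each artist column to unit length (epsilon avoids 0/0)
    cols = matrix / (np.linalg.norm(matrix, axis=0, keepdims=True) + 1e-9)
    sims = cols.T @ cols                       # artist-artist cosine similarity
    i = names.index(query)
    order = np.argsort(-sims[i])               # most similar first
    return [(names[j], float(sims[i, j])) for j in order if j != i][:k]
```

With this toy matrix, the two trip-hop-leaning listeners make Portishead the nearest neighbour of Radiohead.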
Neural-Network-Architectures-for-supervised-Tonic-and-Instrument-classification
Various neural network architectures for supervised tonic classification on the mridangam_stroke dataset and supervised instrument classification on the TinySOL dataset.
Neural-Texture-Sound-synthesis---trained-Neural-Networks
Trained PyTorch neural networks for the Neural-Texture-Sound-Synthesis-with-physically-driven-continuous-controls project.
rasa-chatbot-assistant
A chatbot assistant for the DTIC department of Pompeu Fabra University, Barcelona, implemented with RASA.
Generative-Music-in-Ableton-Live
A Max for Live device that generates melodies, exposing several controls over the probabilities and numerical ranges of pitches, harmonies and rhythms. Inspired by Patter by Adam Florin.
Video-sonification
A Max/MSP-based video sonifier mapping RGB pixel values to DSP parameters. Used by sound designer Matteo Bendinelli for PhACES, a crowdsourcing interactive exhibit by Alessandro Cracolici (https://www.phest.info/alessandro-cracolici) at the 2022 edition of PhEST, the International Festival of Art and Photography of Monopoli, Puglia, Italy.
Algorithmic-Interactive-Music-on-the-Web-Browser
Web-app for automatic generation and computer-assisted manipulation of melodic and rhythmic musical patterns, with built-in synthesisers, transport and BPM controls.
Computational-Musicology-_-Quantify-and-qualify-heterophony-in-Jingju-music
Quantitative analysis of pitch and rhythmic similarity between arbitrary .xml scores; correlation analysis of the Jingju music repertoire, measuring the degree of heterophony between different banshi. Built with the music21 Python library (http://web.mit.edu/music21/).
Metiu-Metiu
Config files for my GitHub profile.
Neural-MIDI-Drum-Patterns-generator-and-interpolator
A neural MIDI drum-pattern generator with a Pure Data GUI and a Python/PyTorch backend, capable of interpolating between specific sets of latent variables. From the 2023 'Computational Music Creativity' course of the Sound and Music Computing Master at UPF, Barcelona.
Polyphonic-Karplus-Strong-synthesiser-and-2-tracks-32-steps-sequencer
A polyphonic Karplus-Strong synthesiser and a 2-track, 32-step sequencer implemented in Pure Data.
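The core of Karplus-Strong synthesis is a noise-filled delay line whose feedback loop runs through a lowpass (averaging) filter; the delay length sets the pitch. A minimal Python sketch of the algorithm (parameter names are illustrative and not taken from the Pure Data patch):

```python
import numpy as np

def karplus_strong(freq, duration, sr=44100, decay=0.996):
    """Pluck a virtual string at `freq` Hz for `duration` seconds."""
    n = int(sr / freq)                     # delay-line length determines the pitch
    buf = np.random.uniform(-1, 1, n)      # burst of white noise excites the string
    out = np.empty(int(sr * duration))
    for i in range(len(out)):
        out[i] = buf[i % n]
        # two-point average lowpass + decay, fed back into the delay line
        buf[i % n] = decay * 0.5 * (buf[i % n] + buf[(i + 1) % n])
    return out

# polyphony is just a mix of independent strings
chord = sum(karplus_strong(f, 1.0) for f in (220.0, 277.2, 329.6)) / 3
```

The averaging filter damps high frequencies faster than low ones, which is what gives the characteristic plucked-string decay.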
Using-spectral-analysis-and-mapping-to-enhance-the-harmonicity-of-a-sound
A research MATLAB project that analyses inharmonic sounds, estimates their most likely fundamental frequency and harmonic template, and performs spectral mapping to make them sound more harmonic while retaining most of their sound quality.
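A standard baseline for finding the most likely fundamental is harmonic summation: score each candidate f0 by the spectral magnitude collected at its harmonic multiples, weighting lower harmonics more to avoid octave errors. This Python sketch illustrates the general idea only; it is not the project's MATLAB implementation, and all parameters are assumptions.

```python
import numpy as np

def estimate_f0(x, sr, f0_range=(50.0, 500.0), n_harmonics=8):
    """Return the candidate f0 whose harmonic template collects the most
    (1/k-weighted) magnitude from the signal's spectrum."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    candidates = np.arange(f0_range[0], f0_range[1], 1.0)
    scores = []
    for f0 in candidates:
        s = 0.0
        for k in range(1, n_harmonics + 1):
            # magnitude at the bin nearest the k-th harmonic, weighted by 1/k
            s += spec[np.argmin(np.abs(freqs - k * f0))] / k
        scores.append(s)
    return float(candidates[int(np.argmax(scores))])
```

The 1/k weighting penalises subharmonic candidates (e.g. f0/2), which would otherwise collect the same peaks at even harmonic indices.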
Gender-Classification-from-audio-files
Gender Classification from audio files using the LibriSpeech dataset