This is the source code for the experiments related to the paper Unsupervised Music Source Separation Using Differentiable Parametric Source Models.
It contains a re-implementation of parts of the DDSP library in PyTorch. We added a differentiable all-pole filter which can be parameterized by line spectral frequencies or reflection coefficients.
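To illustrate the idea (not the repository's actual implementation): an all-pole filter can be parameterized by reflection coefficients, which guarantee a stable filter whenever each coefficient lies in (-1, 1). A minimal sketch, with hypothetical function names, converts reflection coefficients to all-pole coefficients via the step-up recursion and applies the filter sample by sample:

```python
import torch

def reflection_to_allpole(k):
    """Step-up recursion: reflection coefficients k (each in (-1, 1))
    -> all-pole coefficients [1, a_1, ..., a_p] of A(z)."""
    a = torch.zeros(0)
    for i in range(len(k)):
        # a_j^{(m)} = a_j^{(m-1)} + k_m * a_{m-j}^{(m-1)},  a_m^{(m)} = k_m
        a = torch.cat([a + k[i] * a.flip(0), k[i].unsqueeze(0)])
    return torch.cat([torch.ones(1), a])

def allpole_filter(x, a):
    """Apply y[n] = x[n] - sum_i a_i * y[n-i], i.e. filter x by 1/A(z)."""
    p = len(a) - 1
    hist = torch.zeros(p)  # past outputs y[n-1], ..., y[n-p]
    out = []
    for xn in x:
        yn = xn - torch.dot(a[1:], hist)
        out.append(yn)
        hist = torch.cat([yn.unsqueeze(0), hist[:-1]])
    return torch.stack(out)
```

Because every operation is differentiable, gradients flow back to the reflection coefficients; the sample-by-sample loop is for clarity only and would be too slow for real training.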
Please cite the paper if you use parts of the code in your work.
The following packages are required:
pytorch=1.6.0
matplotlib=3.3.1
python-sounddevice=0.4.0
scipy=1.5.2
torchaudio=0.6.0
tqdm=4.49.0
pysoundfile=0.10.3
librosa=0.8.0
scikit-learn=0.23.2
tensorboard=2.3.0
resampy=0.2.2
pandas=1.2.3
configargparse=0.13.0
These packages can be installed from the conda-forge and pytorch channels. Python 3.7 or 3.8 is recommended. From a new conda environment:
conda update conda
conda config --add channels conda-forge
conda config --set channel_priority strict
conda config --add channels pytorch
conda install pytorch=1.6.0
conda install matplotlib=3.3.1 python-sounddevice=0.4.0 scipy=1.5.2 torchaudio=0.6.0 tqdm=4.49.0 pysoundfile=0.10.3 librosa=0.8.0 scikit-learn=0.23.2 tensorboard=2.3.0 resampy=0.2.2 pandas=1.2.3 configargparse=0.13.0
To train the proposed model:
python train.py -c config.txt
To train the U-Net baselines:
python train_u_nets.py -c unet_config.txt
To evaluate a trained model:
python eval.py --tag 'TAG' --f0-from-mix --test-set 'CSD'
Note: 'TAG' is the name of the evaluated model (e.g. unsupervised_2s_satb_bcbq_mf0_1).
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 765068.
Copyright 2021 Kilian Schulze-Forster of Télécom Paris, Institut Polytechnique de Paris. All rights reserved.