
Polyffusion: A Diffusion Model for Polyphonic Score Generation with Internal and External Controls

Home Page: https://polyffusion.github.io


Citation

@inproceedings{polyffusion2023,
    author = {Lejun Min and Junyan Jiang and Gus Xia and Jingwei Zhao},
    title = {Polyffusion: A Diffusion Model for Polyphonic Score Generation with Internal and External Controls},
    booktitle = {Proceedings of the 24th International Society for Music Information Retrieval Conference, {ISMIR}},
    year = {2023}
}

Installation

pip install -r requirements.txt
pip install -e polyffusion/chord_extractor
pip install -e polyffusion
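
A quick sanity check that the dependencies resolved; note that the import name polyffusion is an assumption here, so adjust it if the editable install exposes a different package name:

import torch

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())

try:
    import polyffusion  # hypothetical import name from the editable install
    print("polyffusion import OK")
except ImportError as err:
    print("polyffusion not importable:", err)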

Some Clarifications

  • In the code, "sdf" stands for Stable Diffusion and "ldm" for Latent Diffusion; in this repository both abbreviations refer to the same model.
  • prmat2c in the code is the piano-roll image representation (see the sketch below).
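
For orientation, here is a minimal sketch of how such a two-channel piano-roll image can be constructed, assuming the onset/sustain layout described in the paper (8 bars at 16th-note resolution, i.e. 128 time steps by 128 MIDI pitches); the function name and note format are illustrative, not the repo's actual API:

import numpy as np

N_STEP, N_PITCH = 128, 128  # 8 bars * 16 sixteenth-note steps, 128 MIDI pitches

def notes_to_prmat2c(notes):
    """notes: iterable of (onset_step, pitch, duration_in_steps) triples."""
    prmat = np.zeros((2, N_STEP, N_PITCH), dtype=np.float32)
    for onset, pitch, dur in notes:
        prmat[0, onset, pitch] = 1.0              # channel 0: note onsets
        prmat[1, onset:onset + dur, pitch] = 1.0  # channel 1: sustained presence
    return prmat

# Example: a C major triad (C4, E4, G4) held for one bar.
print(notes_to_prmat2c([(0, 60, 16), (0, 64, 16), (0, 67, 16)]).shape)  # (2, 128, 128)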

Training

Preparations

  • The extracted features of the POP909 dataset can be accessed here. Please put them under /data/ after extraction.

  • The pre-trained models required for training can be accessed here. Please put them under /pretrained/ after extraction.

Modifications

  • You can modify the parameters in the corresponding params_{}.py files under /polyffusion/params/; an illustrative sketch follows below.
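
As an illustration only, a typical edit might look like the following; these field names are assumptions, so consult the actual params file for the real attribute names:

# Hypothetical excerpt of a params_{}.py file; field names are assumptions --
# check the actual file under /polyffusion/params/ for the real ones.
batch_size = 16        # lower this if you run out of GPU memory
learning_rate = 5e-5   # typical magnitude for diffusion training
max_epoch = 100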

Commands

python polyffusion/main.py --model [model] --output_dir [output_dir]

The models that can be selected (i.e., the meaningful options):

  • ldm_chd8bar: conditioned on latent chord representations encoded by a pre-trained chord encoder.
  • ldm_txt: conditioned on latent texture representations encoded by a pre-trained texture encoder.
  • ldm_chdvnl: conditioned on vanilla chord representations.
  • ldm_txtvnl: conditioned on vanilla texture representations.
  • ddpm: vanilla diffusion model from DDPM without conditioning.

Example:

python polyffusion/main.py --model ldm_chd8bar --output_dir result/ldm_chd8bar

Trained Checkpoints

If you'd like to test our trained checkpoints, please access the folder here. We suggest putting them under /result/ after extraction for inference.

Inference

Please see the help messages by running

python polyffusion/inference_sdf.py --help

Examples:

# unconditional generation of length 10x8 bars
python polyffusion/inference_sdf.py --model_dir=result/ldm_chd8bar --uncond_scale=0. --length=10

# conditional generation with guidance scale = 5; the chord progression condition is taken from a song in the POP909 validation set
python polyffusion/inference_sdf.py --model_dir=result/ldm_chd8bar --uncond_scale=5.

# conditional iterative inpainting, i.e. autoregressive generation (default guidance scale = 1)
python polyffusion/inference_sdf.py --model_dir=result/ldm_chd8bar --autoreg

# unconditional melody generation given accompaniment
python polyffusion/inference_sdf.py --model_dir=result/ldm_chd8bar --uncond_scale=0. --inpaint_from_midi=/path/to/accompaniment.mid --inpaint_type=above

# accompaniment generation given melody, conditioned on the chord progression of another MIDI file (default guidance scale = 1)
python polyffusion/inference_sdf.py --model_dir=result/ldm_chd8bar --inpaint_from_midi=/path/to/melody.mid --inpaint_type=below --from_midi=/path/to/cond_midi.mid
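
To inspect a generated MIDI file afterwards, any MIDI library works; below is a small sketch using pretty_midi, where the path is a placeholder and pretty_midi itself is our suggestion rather than necessarily what the repo uses internally:

import pretty_midi

midi = pretty_midi.PrettyMIDI("path/to/generated.mid")  # placeholder path
print(f"duration: {midi.get_end_time():.1f} s")
for inst in midi.instruments:
    # each track holds the generated notes for one instrument
    print(inst.name or "unnamed", "-", len(inst.notes), "notes")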


License: MIT

