betweentwomidnights / gary4live

this is gary4live. musicgen continuations for ableton.

Home Page: https://thecollabagepatch.com


this is gary

intro

my name's kev and i like building shit with audiocraft.

python script

we have a python script in here labeled gary.py that will generate infinitely long remixes of the track you input, based on bpm and some other stuff.

(instructions for python.gary here)
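to give a feel for the bpm math involved, here's a tiny sketch of cutting a prompt on bar boundaries so the continuation lands on the grid. the function names are made up for illustration, not gary.py's actual code:

```python
# illustrative bpm math for a continuation script: trim the prompt audio to a
# whole number of bars so the model's continuation stays on the grid.
# (hypothetical helpers, not gary.py's real functions.)

def bar_length_samples(bpm: float, sample_rate: int = 32000, beats_per_bar: int = 4) -> int:
    """samples in one bar at the given tempo (musicgen runs at 32 kHz)."""
    seconds_per_beat = 60.0 / bpm
    return round(seconds_per_beat * beats_per_bar * sample_rate)

def trim_to_whole_bars(num_samples: int, bpm: float, sample_rate: int = 32000) -> int:
    """largest sample count <= num_samples that is a whole number of bars."""
    bar = bar_length_samples(bpm, sample_rate)
    return (num_samples // bar) * bar

print(bar_length_samples(120))          # 64000 samples = one 2-second bar at 120 bpm
print(trim_to_whole_bars(200000, 120))  # 192000 = exactly 3 bars
```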

colab notebook

there's a colab notebook that allows us to play on easy mode too:

https://colab.research.google.com/drive/10CMvuI6DV_VPS0uktbrOB8jBQ7IhgDgL?usp=sharing

gary.colab has a buddy now...named Terry:

[here's the link](https://colab.research.google.com/drive/18TdDwcPK4szK4m7nBdtTIxLfdrZyFqfU#scrollTo=QrrimPAgIC2Q) (alternate copy: https://colab.research.google.com/drive/1I4NqrVpuBqEL0So8NopnQ3NIr-4RBdH7?usp=sharing), a WIP thanks to SepticDNB from the MusicGen discord.

workflow

i find the coolest workflow is to let python.gary rock out for 5 min to your track and then use it more granularly inside ableton on its own outputs. there are gonna be rad parts that you wanna extend and there are definitely going to be parts you want to completely replace. he tries.


Max4Live

install

you're going to have to install Max. It comes with a 30-day trial.

By the time my trial is up, we'll probably have moved to iPlug2, JUCE, or both.

update: max4live trial ran out, and it turns out i can still edit the plugin from within ableton. thanks max/MSP.

TL;DR: Download Max here

after installation, open up ableton and add your base audiocraft folder to the browser on the left.

dependencies

open your terminal in the g4l directory and install node if you don't already have it. python-shell is the only npm dependency it needs to run (npm install python-shell). node-for-max automatically takes care of the max-api that's used in our node.script, so don't npm install max-api anywhere.

audiocraft installation

setup

you'll have to install all the stuff for audiocraft. mine's in C:\gary4live.

fine-tunes are located in:

C:\gary4live\audiocraft\models\vanya_aidnb_0.1 for example

Note: this is important for the python script (g4l.py) to find them.
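a minimal sketch of the lookup the script has to do, assuming that folder layout (find_finetune is an illustrative name, not the real function in g4l.py):

```python
import os

BASE_PATH = r"C:\gary4live\audiocraft\models"  # where the script expects fine-tunes

def find_finetune(name: str, base_path: str = BASE_PATH) -> str:
    """return the full path of a fine-tune folder, or fail with a clear message."""
    candidate = os.path.join(base_path, name)
    if not os.path.isdir(candidate):
        raise FileNotFoundError(
            f"no fine-tune named {name!r} under {base_path} - "
            "check the folder name you typed in the ableton textbox"
        )
    return candidate
```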

had an error with my first gary4live which used an audiocraft venv located in C:\audiocraft-new\. the script was in C:\audiocraft-new\audiocraft\gary4live.py and the fine-tunes were in C:\audiocraft-new\audiocraft\audiocraft\models.

don't do this.

when entering in the path name inside ableton and feeding it to python, somehow identical paths got confused. the python script on its own was able to locate 'audiocraft/vanya_aidnb_0.1', but node/python-shell weren't able to find it somehow.
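one way to dodge that whole class of bug is to normalize every path to an absolute, forward-slash form before it crosses the node/python boundary. a stdlib-only sketch (not what g4l.py currently does):

```python
import os

def normalize_for_handoff(path: str) -> str:
    """absolute path with forward slashes, so the same string means the same
    thing on both sides of the node <-> python-shell boundary."""
    return os.path.abspath(os.path.expanduser(path)).replace("\\", "/")

# redundant separators and "." segments collapse to one canonical form:
print(normalize_for_handoff("audiocraft/./models//vanya_aidnb_0.1"))
```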

if m4l can't find stuff

you'll have the option to open the node script from within ableton to tweak the file paths that call the python script. or just find commentedout.js in the g4l folder and do it in vs code.

these are those lines:

function processAudio() {
    const scriptPath = "C:/gary4live/g4l.py";
    const pythonPath = "C:/gary4live/scripts/python";
    // ...
}

instructions for setting up your audiocraft env from the official repo are at the bottom of this page.

it's important to make sure you have everything pointing to the correct place.

hopefully soon it will be containerized better so that no one has to worry about filepaths.

using gary4live

(instructions coming in the form of text and YouTube)

gary4live is not easy to use. It's janky right now. That yellow thing that's s'posed to only light up when Ableton is playing gets confused, and sometimes you gotta press the button in the upper left a couple times to make it remember that it's stopped before you hit play to record the buffer.

flow

  1. find spot to record buffer.

    • use a loop brace so that, if live.toggle messes up, you can restart the recording by pressing play again.
    • press [set myBuffer 1] to see the waveform fill up to 30 secs. Gary's output will be the length of what you give it.
  2. press [write C:\gary4live\myBuffer.wav]

    • just make sure you hit the [write C:\gary4live\myBuffer.wav] button right as you fill the buffer. Even if you only want him to continue a small section, let the buffer fill for much longer than the initial section you want continued. he's gonna fill it up.
  3. press [npm stuff]

    • you should see green stuff appear on the right in the debug window. there's a tiny button next to the node.script object on the left that will update the rest of the debug window tabs.
  4. leave the textbox empty if you want to use the base model.

    • Type in 'vanya_aidnb_0.1' if you downloaded our collection and the python gods can find the directory.
  5. Check for green stuff

    • If you see only green stuff, and double clicking [print commentedout.js] shows it indeed received 'vanya_aidnb_0.1' when you clicked outside the text window, you can now press the middle button.
      • right now it doesn't tell you a whole lot of information if everything goes well. you just gotta wait and keep pressing the 'replace' button. it will.
  6. press [replace C:\gary4live\myOutput.wav]

    • pressing [set myOutput 1] ensures the waveform that's displayed was updated.
  7. test gary's output

    • press the numbered buttons [0] [1] [-1] to hear gary's output. The [0] below them pauses it.

the tricky part: getting it back into the arrangement

  1. arm secondary track

    • change the dropdown from Ext. In to the track that gary is attached to. it's ableton stuff.
  2. record into session

    • i use a nifty button at the far right of gary that says [set session record 1] to just automatically do it.
      • just make sure when you press the button that you've clicked somewhere in your arrangement where there's no audio yet.
      • i guarantee there's a zillion ways to do this better. This one works well enough for me.

coming soon

(a container that does a one-click installation like NAM does)

helps

we really need help. we have some rad help already from people like lyraaaa_, VANYA, and SepticDNB from the MusicGen Discord. They're absolute wizards with Colab.

for example, here's Lyra's amazing colab for making your own fine-tunes:

Lyra's Colab

you can then export them as two simple files and make a folder inside audiocraft/models for them.

then you can use gary's textbox to call them just by typing: vanya_aidnb_0.1 (for example) in the ableton text box before pressing the button.

the python script has 'C:\gary4live\audiocraft\models' hardcoded into it. if your directory for the models is different, just change this line in g4l.py:

# Hardcoded base path
base_path = "C:/gary4live/audiocraft/models/"
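the textbox-to-model logic is just string handling. here's a hedged sketch of it (the function name and the default checkpoint id are assumptions, check the script for the real default):

```python
BASE_PATH = "C:/gary4live/audiocraft/models/"

def resolve_model(textbox: str, base_path: str = BASE_PATH) -> str:
    """empty textbox -> a stock musicgen checkpoint; anything else -> a
    fine-tune folder under base_path. (default id is an assumption.)"""
    name = textbox.strip()
    if not name:
        return "facebook/musicgen-small"  # assumed default, not verified
    return base_path + name

print(resolve_model(""))                 # facebook/musicgen-small
print(resolve_model("vanya_aidnb_0.1"))  # C:/gary4live/audiocraft/models/vanya_aidnb_0.1
```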

https://discord.gg/D5jANUZt to find more fine-tunes and a dope community of ppl

im really bad at githubbing. if someone wants to be made a contributor/take this over um, plz find me on https://twitter.com/@thepatch_kev

hear jams and see some tutorial stuff at https://youtube.com/@thecollabagepatch

also at https://thecollabagepatch.com where you can find other weird stuff.

i really want this to be a community project.

...below are the official installation instructions for audiocraft.

AudioCraft

docs badge linter badge tests badge

AudioCraft is a PyTorch library for deep learning research on audio generation. AudioCraft contains inference and training code for two state-of-the-art AI generative models producing high-quality audio: AudioGen and MusicGen.

Installation

AudioCraft requires Python 3.9 and PyTorch 2.0.0. To install AudioCraft, you can run the following:

# Best to make sure you have torch installed first, in particular before installing xformers.
# Don't run this if you already have PyTorch installed.
pip install 'torch>=2.0'
# Then proceed to one of the following
pip install -U audiocraft  # stable release
pip install -U git+https://git@github.com/facebookresearch/audiocraft#egg=audiocraft  # bleeding edge
pip install -e .  # or if you cloned the repo locally (mandatory if you want to train).

We also recommend having ffmpeg installed, either through your system or Anaconda:

sudo apt-get install ffmpeg
# Or if you are using Anaconda or Miniconda
conda install "ffmpeg<5" -c conda-forge

Models

At the moment, AudioCraft contains the training code and inference code for:

  • MusicGen: A state-of-the-art controllable text-to-music model.
  • AudioGen: A state-of-the-art text-to-sound model.
  • EnCodec: A state-of-the-art high fidelity neural audio codec.
  • Multi Band Diffusion: An EnCodec compatible decoder using diffusion.

Training code

AudioCraft contains PyTorch components for deep learning research in audio and training pipelines for the developed models. For a general introduction of AudioCraft design principles and instructions to develop your own training pipeline, refer to the AudioCraft training documentation.

For reproducing existing work and using the developed training pipelines, refer to the instructions for each specific model that provides pointers to configuration, example grids and model/task-specific information and FAQ.

API documentation

We provide some API documentation for AudioCraft.

FAQ

Is the training code available?

Yes! We provide the training code for EnCodec, MusicGen and Multi Band Diffusion.

Where are the models stored?

Hugging Face stores the models in a specific location, which can be overridden by setting the AUDIOCRAFT_CACHE_DIR environment variable for the AudioCraft models. In order to change the cache location of the other Hugging Face models, please check out the Hugging Face Transformers documentation for the cache setup. Finally, if you use a model that relies on Demucs (e.g. musicgen-melody) and want to change the download location for Demucs, refer to the Torch Hub documentation.
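for example, relocating the AudioCraft cache just means setting the environment variable before the library is imported (the path here is only an example):

```python
import os

# set this *before* importing audiocraft so model weights download/cache there
os.environ["AUDIOCRAFT_CACHE_DIR"] = "D:/models/audiocraft-cache"  # example path
print(os.environ["AUDIOCRAFT_CACHE_DIR"])
```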

License

  • The code in this repository is released under the MIT license as found in the LICENSE file.
  • The model weights in this repository are released under the CC-BY-NC 4.0 license as found in the LICENSE_weights file.

Citation

For the general framework of AudioCraft, please cite the following.

@article{copet2023simple,
    title={Simple and Controllable Music Generation},
    author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez},
    year={2023},
    journal={arXiv preprint arXiv:2306.05284},
}

When referring to a specific model, please cite as mentioned in the model specific README, e.g ./docs/MUSICGEN.md, ./docs/AUDIOGEN.md, etc.

About

License: MIT