RustAudio / dsp-chain

A library for chaining together multiple audio DSP processors/generators, written in Rust!

DSP Node Hierarchy and Parallel Rendering.

mitchmindtree opened this issue · comments

The non-cyclical DSP node graph should allow us to consider rendering our audio in parallel (one task per node). This would be a great feature for dsp if we could pull it off. There is a lot of potential performance gain in parallel audio, though it would require a lot of benchmarking and fiddling, and probably won't happen until the main framework of rust-dsp is mostly complete.

The problem is that there are many ways to do this. You can use native threads, green threads or a work queue. We would have to think carefully before choosing a direction.

I wonder how much performance matters. It is usually good enough if the sound is generated in real time.

I'm happy to spend a lot of time refining performance here to make it as transparent as possible. The generative music engine I'm working on currently consumes most of the CPU (normally running between 20 and 100 oscillators, sometimes more depending on the song), and it's not even doing effects processing / automation yet :/ . I can definitely look into pre-rendering parts, but getting it all to respond contextually in real time will take some consideration.

Perhaps it would at least be nice to run audio on another thread; it's an easy, self-contained chunk of processing where the game doesn't need results back. Fire and forget, then just update control parameters.

Audio does have its own concurrency needs though... it's genuinely realtime.

@dobkeratops the audio processing should already be easy to thread atm if you spawn your SoundStream type on its own task and set up your own channels (we're still thinking about ways to make this easier), but agreed, any performance gains are worth considering.

Currently, the node system works as a large hierarchy. Each node holds a vector of input nodes. When audio_requested is called for one node, that node then calls audio_requested for all of its input nodes (and so on). Each node then sums the input buffers (Vec<f32>) together, multiplying for amplitude and panning. This is definitely one area that may be hugely parallelised :-) We could have a condition that says:

if self.inputs.len() > 1 {
    // call audio_requested for each input within a unique task
}

This way, the rendering of every input to a node may occur in parallel, meaning a tree of inputs like this:

  • Master mixer
    • 5 groups/busses
      • ~5 instruments per group/bus
        • ~3 synth voices per instrument
          • ~3 oscillators per synth voice

could be processed in parallel across ~225 tasks.

Closing in favour of #89