nir / jupylet

Python game programming in Jupyter notebooks.

Home Page: https://jupylet.readthedocs.io


get realtime soundcard audio as input for shadertoy shaders

cyberic99 opened this issue

Hi,

I have tried some of the shadertoy examples.

It seems the audio input can be the 'output' from the sounds playing in jupylet, but is there a way to use realtime audio input from the soundcard?

Thank you

If you can read the sound card audio as an array of samples, you should be able to feed it to the shadertoy.

The get_shadertoy_audio() function accepts an optional data parameter that may contain arbitrary audio samples (I believe in the range [-1,1]):

def get_shadertoy_audio(amp=1., length=512, buffer=500, data=None, channel_time=None):

You can see how it is used in the piano example:

st0.set_channel(0, *get_shadertoy_audio(amp=5))

In the piano example the data parameter is not used, so the function reads audio data from the jupylet audio buffer instead.
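
For example, feeding an arbitrary buffer through the data parameter should look roughly like this (just a sketch, assuming the same st0 shadertoy object as in the piano example, and that data takes a (length, 2) float array in [-1, 1]):

import numpy as np

# Hypothetical stand-in for real soundcard samples: a 440 Hz sine wave.
t = np.arange(512) / 48000
wave = 0.5 * np.sin(2 * np.pi * 440 * t)
stereo = np.stack([wave, wave], axis=-1)    # shape (512, 2), values in [-1, 1]

st0.set_channel(0, *get_shadertoy_audio(amp=5, data=stereo, length=512))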

I ended up with a semi-working solution.

I added this in the piano example:

import _thread

import numpy as np
import soundcard as sd

# Shared buffer: written by the recording thread, read by the render loop.
sound_input = np.zeros((512, 2))

def _rec():
    global sound_input

    # Prefer a loopback device (the card's own output); otherwise fall back
    # to the default microphone.
    m = None
    for dev in sd.all_microphones(include_loopback=True):
        if dev.isloopback:
            m = dev
            break
    if m is None:
        m = sd.default_microphone()

    with m.recorder(samplerate=48000) as mic:
        while True:
            sound_input = mic.record(numframes=512)

_rec_tid = _thread.start_new_thread(_rec, ())

And I call get_shadertoy_audio like this:

si = sound_input.copy()    # snapshot, so the recording thread cannot modify it mid-frame
st0.set_channel(0, *get_shadertoy_audio(amp=5, data=si, length=512))

It is kind of working, but the waveform is a bit shaky... I guess it is due to a lack of synchronization between the soundcard callback and the render loop.

Do you think I should call render() inside the audio capture loop?

Thanks for your hints.

The shakiness may be due to slight mistimings in the operation of the various moving parts (e.g. your recording thread). In the get_shadertoy_audio() function I fix it by calling get_correlation():

ix = get_correlation(a0.mean(-1), buffer)

It finds a subset of the input buffer that minimizes the shakiness, by maximizing correlation with the previous buffer. You can apply it to your own buffer as well; let me know if it works.
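
Something along these lines might work (a rough sketch; I am assuming get_correlation returns a start index into the buffer and can be imported from the same place as get_shadertoy_audio - adjust the import and the slice to the actual implementation):

# Assumes the recording thread records more frames than needed, e.g. numframes=1024,
# so there is room to shift the 512-sample window.
si = sound_input.copy()
ix = get_correlation(si.mean(-1), 500)      # offset that best matches the previous buffer
si = si[ix:ix + 512]                        # keep a stable 512-sample window
st0.set_channel(0, *get_shadertoy_audio(amp=5, data=si, length=512))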

In the next release I will make it an option to apply it to the user-supplied buffer.

Hi,

First of all, thank you for your detailed answer and for taking some time to look at this issue.

Yeah, you're right, it is much better when using get_correlation(). Maybe you could add it as an option in get_shadertoy_audio() too?

But regarding capturing the audio, I think the best way would be to call render() in the audio callback.

I have looked at the code to see how I could do it, but I'm not sure there is an easy way to call render() outside of run().

Am I correct?

I don't think you can do that. render() is a callback function called by the async loop.

Added a new parameter to auto-correlate user-supplied audio buffers:

correlate=True,
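
So the manual call above should presumably become something like this (assuming the new parameter is simply passed to get_shadertoy_audio()):

st0.set_channel(0, *get_shadertoy_audio(amp=5, data=si, length=512, correlate=True))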

Thanks for this bug report!