LabSound / LabSound

:microscope: :speaker: graph-based audio engine

Home Page: http://labsound.io


How to build LabSound projects with Emscripten?

DanieleCapuano opened this issue · comments

I succeeded in building the examples using
emcmake cmake -DCMAKE_INSTALL_PREFIX='../labsound-distro-emcc' ..
cmake --build . --target install

but when I try to run the output .js file I get a console error:
"RtApiDummy: This class provides no functionality".

A WebAudio back end would need to be written for that. If miniaudio works with Emscripten, then the miniaudio back end might work. The logic at the top of this file: https://github.com/LabSound/LabSound/blob/master/cmake/LabSound.cmake would need to be modified to know how to build for Emscripten. If this works, a PR would be welcome!
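Purely as a sketch of the shape such a change could take (the variable names below are illustrative, not the actual ones used in LabSound.cmake), the platform selection could gain an Emscripten branch that routes the build through the miniaudio back end:

```cmake
# Hypothetical sketch: labsnd_backend / USE_MINIAUDIO are illustrative names,
# not the actual variables and defines in LabSound.cmake.
if (EMSCRIPTEN)
    # miniaudio has a web path, so try the miniaudio back end instead of RtAudio
    add_definitions(-DUSE_MINIAUDIO)
    set(labsnd_backend
        "${LABSOUND_ROOT}/src/backends/miniaudio/AudioDevice_Miniaudio.cpp")
else()
    # existing WIN32 / APPLE / UNIX branches stay as they are
endif()
```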

I am thinking of a direct port to the native Web Audio API.

Yes, that makes sense.

I am very interested in this. Microsoft has a WebGL framework, BabylonJS, that also has community contributors. (I write / support a Blender exporter.) They have also been developing something called BabylonNative, https://github.com/BabylonJS/BabylonNative. It allows running the JavaScript that comprises a 3D scene as a native application on Windows, Android, Linux, macOS & iOS devices / XR headsets.

Web audio is a glaring missing piece. I have started trying to wrap LabSound into its add-in facility, but did not get very far. As there is a JavaScript VM inside the app, the Emscripten route sounds even cleaner.

Integration into BabylonJS is an interesting idea. There are already some js bindings for LabSound, perhaps the node3d bindings by @raub could provide some insight into how to do it: https://github.com/node-3d/webaudio-raub

Thanks, pointing out that N-API implementation was really helpful. I mentioned this to Microsoft, and they really liked it. Their preliminary assessment was that this is just what is needed in the src & js directories. The person wondered how it would fit into their cmake / sub-module build framework.

I think the project lead, who will need to look at it, is out this week due to a holiday combined with vacation. It sounds like they might do everything themselves, which is fine by me.

Thanks again.

Sounds good! I'm more than happy to accept PRs or Issues from them here.

This issue can move forward when this issue is resolved: WebAudio/web-audio-api#2442. If the Emscripten patches by @juj noted there land, an Emscripten back end for LabSound will be straightforward.

It seems like it should be possible to create a ScriptProcessorNode or AudioWorkletNode where, during the onaudioprocess callback on the web node, you pull the graph, render the buffer in some call exposed through WebAssembly, and copy it out.
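As a minimal sketch of the wasm side of that idea: a C-exported render function that fills an interleaved float buffer the JS callback can copy out of the heap. The function name ls_render_quantum is hypothetical, and a sine stub stands in for the actual LabSound graph pull:

```cpp
#include <cmath>

// Hypothetical wasm-side entry point; the sine stub below stands in for
// pulling the LabSound graph for one render quantum.
static double g_phase = 0.0;
static const double kTwoPi = 6.283185307179586;

extern "C" void ls_render_quantum(float* out, int frames, int channels,
                                  float sampleRate)
{
    const double inc = kTwoPi * 440.0 / sampleRate;  // 440 Hz test tone
    for (int i = 0; i < frames; ++i) {
        const float s = (float)(0.25 * std::sin(g_phase));
        g_phase += inc;
        for (int c = 0; c < channels; ++c)
            out[i * channels + c] = s;  // interleaved: one value per channel
    }
}
```

On the JS side, the onaudioprocess callback would call something like Module._ls_render_quantum(ptr, 128, 2, ctx.sampleRate) and copy the interleaved floats out of HEAPF32 into the output channels.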

That's a very interesting idea. I see some things around WASM and AudioWorkletNode, e.g. https://emscripten.org/docs/api_reference/wasm_audio_worklets.html. I'm very curious how that's implemented. It looks like it should be possible to use LabSound in conjunction with the methods they describe there, although at first glance I can't tell how much work is involved.

Another option might be to simply make an OpenAL backend, as Emscripten has its own OpenAL port that works with Web Audio. Using double buffering and streaming buffers is probably the way to go here; it would increase the latency, but might be worth looking into.
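For illustration, the refill/rotation logic such a streaming backend would run each tick might look like the following. The OpenAL calls are only named in comments (in a real backend, "processed" would come from alGetSourcei with AL_BUFFERS_PROCESSED, and the refill would call alBufferData plus alSourceQueueBuffers); the buffer count and size are arbitrary:

```cpp
#include <algorithm>
#include <array>
#include <cstdint>
#include <vector>

// Illustrative sketch of the streaming-buffer rotation an OpenAL backend
// would drive; names and sizes are not from LabSound.
struct StreamState {
    static constexpr int kNumBuffers = 3;        // triple buffering
    static constexpr int kFramesPerBuffer = 1024;
    std::array<std::vector<float>, kNumBuffers> buffers;
    int next = 0;           // index of the oldest queued buffer
    uint64_t refills = 0;   // total buffers refilled so far

    StreamState() { for (auto& b : buffers) b.resize(kFramesPerBuffer); }

    // Called once per tick of the app's update loop. `processed` is how many
    // buffers the device has finished playing since the last call.
    void pump(int processed) {
        while (processed-- > 0) {
            auto& b = buffers[next];
            // Pull the next chunk from the audio graph (stub: silence).
            std::fill(b.begin(), b.end(), 0.0f);
            // alBufferData(id, format, b.data(), bytes, rate);
            // alSourceQueueBuffers(source, 1, &id);
            next = (next + 1) % kNumBuffers;
            ++refills;
        }
    }
};
```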

SDL_Audio might also be a viable backend for web.


The bug reported to Web Audio in WebAudio/web-audio-api#2442 is not necessary for using Audio Worklets with Emscripten. Support for Audio Worklets with Emscripten has already landed, and the Web Audio API issue 2442 is more of a performance optimization, simplification and code size improvement to the existing integration.

As I recall, an audio worklet will require the whole process to run in that "worker context", making any communication with the main thread a bit complicated.

Ideally the emscripten code runs in the main thread, and expose the audio graph through a SharedArrayBuffer that the worklet can access to render the graph.

This way you can make a whole application using opengl/webgl and LabSound, and simply pick a native target or emscripten at compile time, and everything should just work, even c++ threading.

Some glue will need to be added on the js side to facilitate this of course, but it could be added with emscripten.
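A single-producer/single-consumer ring buffer is the usual shape for that shared region: the main thread renders the LabSound graph into it, and the worklet drains 128-frame quanta. A minimal sketch (names and sizes are illustrative; with Emscripten pthreads the wasm heap is itself backed by a SharedArrayBuffer, so both sides can see this structure):

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

// Lock-free SPSC float ring buffer. Indices increase monotonically and are
// reduced modulo capacity only when touching storage, which keeps the
// free/available arithmetic simple.
class SpscRing {
public:
    explicit SpscRing(size_t capacity) : buf_(capacity) {}

    // Producer side (main thread): returns how many samples were accepted.
    size_t write(const float* src, size_t n) {
        const size_t r = read_.load(std::memory_order_acquire);
        const size_t w = write_.load(std::memory_order_relaxed);
        const size_t free_space = buf_.size() - (w - r);
        const size_t todo = n < free_space ? n : free_space;
        for (size_t i = 0; i < todo; ++i)
            buf_[(w + i) % buf_.size()] = src[i];
        write_.store(w + todo, std::memory_order_release);
        return todo;
    }

    // Consumer side (audio worklet): returns how many samples were read.
    size_t read(float* dst, size_t n) {
        const size_t w = write_.load(std::memory_order_acquire);
        const size_t r = read_.load(std::memory_order_relaxed);
        const size_t avail = w - r;
        const size_t todo = n < avail ? n : avail;
        for (size_t i = 0; i < todo; ++i)
            dst[i] = buf_[(r + i) % buf_.size()];
        read_.store(r + todo, std::memory_order_release);
        return todo;
    }

private:
    std::vector<float> buf_;
    std::atomic<size_t> write_{0};
    std::atomic<size_t> read_{0};
};
```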

Is there any way to compile LabSound to wasm now? Waiting for a good method.

Reviewing all the options, a backend based on libSDL seems like the right answer. I think Emscripten is well integrated with SDL2. If anyone is working on an SDL backend and has progress to share, that would be most welcome.

Looks like a backend won't be too difficult. I'm not able to start on this at the moment, but if someone wants to have a look, I think the miniaudio back end could be modified easily. A trivial SDL setup looks like this:

#include "sdlkit.h"
#include <SDL.h>
#include <cstring>
#include <vector>

// playing_sample, mute_stream, FetchSamples, and VERIFY come from the
// surrounding application (this snippet is adapted from sfxr's sdlkit).
static void SDLAudioCallback(void *userdata, Uint8 *stream, int len)
{
	if (playing_sample && !mute_stream)
	{
		unsigned int l = len / 2;              // len is bytes; AUDIO_S16SYS is 2 bytes per sample
		std::vector<float> fbuf(l, 0.0f);      // avoid a variable-length array (non-standard C++)
		FetchSamples(l, fbuf.data(), NULL);    // render the graph into the float buffer
		while (l--)
		{
			float f = fbuf[l];
			if (f < -1.0f) f = -1.0f;          // clamp before converting to 16-bit
			if (f > 1.0f) f = 1.0f;
			((Sint16*)stream)[l] = (Sint16)(f * 32767);
		}
	}
	else
		memset(stream, 0, len);                // output silence
}

void initSDLAudio() {
	SDL_AudioSpec des = {};                    // zero-initialize the fields we don't set
	des.freq = 44100;
	des.format = AUDIO_S16SYS;
	des.channels = 1;
	des.samples = 512;                         // buffer size in sample frames
	des.callback = SDLAudioCallback;
	des.userdata = NULL;
	VERIFY(!SDL_OpenAudio(&des, NULL));        // SDL_OpenAudio returns 0 on success
	SDL_PauseAudio(0);                         // unpause to start the callback
}

I'll try it soon, thanks.