edimuj / cordova-plugin-audioinput

This iOS/Android Cordova/PhoneGap plugin enables audio capture from the device microphone by forwarding audio to the web layer of your application in near real time. A typical usage scenario for this plugin is to use the captured audio as the source for a Web Audio node chain, where it can then be analyzed, manipulated and/or played.

Home Page: https://github.com/edimuj/app-audioinput-demo


Getting the volume level

yydad117 opened this issue · comments

Hi, I like your plugin. Is it possible to get the volume level directly from the microphone, or to detect volume changes with the plugin?

commented

It depends, of course, on how accurate you need it to be, but you can get the current volume level registered by the microphone from the plugin.

One way of doing it is to use the Web Audio (AudioNode) based method that the plugin supports. The following code should be read as pseudo-code, since I just wrote it here in this issue and have not actually run it. It may contain some errors:

// Start the plugin in Web Audio mode
audioinput.start({
    streamToWebAudio: true
});

// Get the Web Audio Context from the plugin
var audioContext = audioinput.getAudioContext();

// Create and init a Web Audio Analyser node
analyser = audioContext.createAnalyser();
analyser.smoothingTimeConstant = 0.3;
analyser.fftSize = 1024;

processorNode = audioContext.createScriptProcessor(2048, 1, 1);

processorNode.onaudioprocess = function() {
    var array = new Uint8Array(analyser.frequencyBinCount);
    analyser.getByteFrequencyData(array);
    var averageVolume = getAverageVolume(array);
    console.log('Average volume in this chunk was: ' + averageVolume);
};

function getAverageVolume(array) {
    var amplitudeSum = 0;

    for (var i = 0; i < array.length; i++) {
        amplitudeSum += array[i];
    }

    return amplitudeSum / array.length;
}

// Connect the audioinput to the analyser node
audioinput.connect( processorNode );

If you don't want to use Web Audio, you can do the same thing using the audioinput event-based method: you'll receive chunks (arrays) of samples from the microphone, which the getAverageVolume function can handle.
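A quick sketch of what that event-based variant could look like. This is untested; it assumes the plugin's `audioinput` event on `window`, with the sample chunk in `evt.data` as values roughly in the -1..1 range:

```javascript
// Average absolute amplitude of one chunk of samples (pure helper).
function getAverageVolume(samples) {
    var amplitudeSum = 0;
    for (var i = 0; i < samples.length; i++) {
        amplitudeSum += Math.abs(samples[i]);
    }
    return amplitudeSum / samples.length;
}

// Only wire up the plugin when running inside a Cordova webview.
if (typeof window !== "undefined" && window.audioinput) {
    window.addEventListener("audioinput", function (evt) {
        // evt.data is assumed to hold the chunk of captured samples
        console.log("Average volume in this chunk was: " + getAverageVolume(evt.data));
    }, false);

    // With streamToWebAudio off, chunks are delivered as events instead
    window.audioinput.start({ streamToWebAudio: false });
}
```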

I hope this helps.

Thanks for your reply.

Unfortunately, I tried your code and it is not working.
There is no console.log output when I speak into the microphone.

I tried with streamToWebAudio set to false and used the audioinput event-based method instead, but it had a very large time delay, so I wanted to try your method; however, it is not working.

Can you give me more examples?

Thanks,
Hugo Lo

commented

Do you have some code I can look at?

commented

Closing this now, since there hasn't been any activity for a month.

I will note for future readers that this method did work for me.

Hey @edimuj, I'm getting the same problem as @yydad117: the array has all 0s in it. :( Please help, @edimuj or @dested; this would solve a ton of problems for me if it works.

commented

Do you hear the captured sound if you connect the audioinput source or the scriptprocessor node to the audiocontext destination? I also remember something about the scriptprocessor node having to be connected to the destination or another node (that is connected to the destination) in order for it to work correctly (Chrome/Blink): http://stackoverflow.com/questions/23348109/why-must-a-scriptprocessornode-be-connected-to-a-destination

Another thing: Why does your code first fill the array with getByteTimeDomainData and then again by getByteFrequencyData?
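Note that those two calls return different things: getByteTimeDomainData gives you the raw waveform, where silence sits at the midpoint value 128, while getByteFrequencyData gives you the spectrum, where silence is all zeros. A small untested sketch of computing a volume level from time-domain bytes, measuring deviation from that midpoint:

```javascript
// Volume from time-domain bytes: silence is centered at 128, so measure
// the average deviation from that midpoint (0 = silence, ~127 = full scale).
function timeDomainVolume(bytes) {
    var deviationSum = 0;
    for (var i = 0; i < bytes.length; i++) {
        deviationSum += Math.abs(bytes[i] - 128);
    }
    return deviationSum / bytes.length;
}

// A silent buffer from getByteTimeDomainData would be all 128s:
var silent = new Uint8Array(1024).fill(128);
console.log(timeDomainVolume(silent)); // 0
```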

@edimuj Thanks for your response!
Yes I hear captured sound.
I got rid of the script processor node and it works.
I was trying different graphs, now I only use frequency data thanks!

One question: how do I get rid of the playback of the captured sound? It interferes with recording. I am not using this:
audioinput.connect(audioinput.getAudioContext().destination);

commented

The only possible way to get this plugin to perform playback is by using the connect function you mentioned above and connecting it directly or indirectly to audioContext.destination. So if you don't use that and aren't using a scriptprocessor either, I would need to see your code to understand why audio is being played through the speakers. Anyway, since scriptprocessors need to be connected to the destination in order to work, the easiest way to silence one is to create a gain node, set its value to zero, and then connect that gain node to the destination.
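Roughly like this (untested sketch; the helper name is just for illustration):

```javascript
// Keep a ScriptProcessorNode "alive" without audible playback by routing
// it to the destination through a zero-gain node.
function connectThroughSilentGain(audioContext, node) {
    var mute = audioContext.createGain();
    mute.gain.value = 0; // node still processes audio, but nothing is heard
    node.connect(mute);
    mute.connect(audioContext.destination);
    return mute;
}

// Usage, guarded so it only runs where Web Audio exists:
if (typeof AudioContext !== "undefined") {
    var ctx = new AudioContext();
    var processor = ctx.createScriptProcessor(2048, 1, 1);
    connectThroughSilentGain(ctx, processor);
}
```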

commented

Doing some housekeeping: since there hasn't been any activity regarding this issue for a while now, I'm closing it.

Thank you for your help!

I have tried to convert your pseudo-code, but I get some errors.
As described above, I get back only zeros in the array. From my understanding, the processorNode object is only responsible for repeatedly reading the volume level. Have I understood that correctly?

Here is my code:

window.audioinput.start({
    streamToWebAudio: true
});

var audioContext = window.audioinput.getAudioContext();

// Create and init a Web Audio Analyser node
var analyser = audioContext.createAnalyser();
// analyser.smoothingTimeConstant = 0.3;
analyser.fftSize = 1024;

var array = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(array);
console.log(array);

I get the following output:

Uint8Array(1024) [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…]

Do you have an example for me that works, please?

In other examples, calculating the volume level required a stream. Is there a way to get a stream through the plugin?

Example


audioContext = new AudioContext();

// Ask for an audio input
navigator.getUserMedia(
    {
        "audio": {
            "mandatory": {
                "googEchoCancellation": "false",
                "googAutoGainControl": "false",
                "googNoiseSuppression": "false",
                "googHighpassFilter": "false"
            },
            "optional": []
        }
    },
    onMicrophoneGranted);

function onMicrophoneGranted(stream) {
    // Create an AudioNode from the stream.
    mediaStreamSource = audioContext.createMediaStreamSource(stream);

    // Create a new volume meter and connect it.
    meter = createAudioMeter(audioContext);
    mediaStreamSource.connect(meter);

}

createAudioMeter is an added function
https://github.com/cwilso/volume-meter/blob/master/volume-meter.js
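The core idea of that volume meter is to compute an RMS-style level per audio block. A rough, untested sketch of the concept (not the actual volume-meter.js implementation):

```javascript
// RMS (root mean square) of a block of Float32 samples; this tracks
// perceived loudness better than a plain average of the raw values.
function rmsLevel(samples) {
    var sumOfSquares = 0;
    for (var i = 0; i < samples.length; i++) {
        sumOfSquares += samples[i] * samples[i];
    }
    return Math.sqrt(sumOfSquares / samples.length);
}

// Inside a ScriptProcessorNode this would be called once per block, e.g.:
// processor.onaudioprocess = function (e) {
//     meterLevel = rmsLevel(e.inputBuffer.getChannelData(0));
// };
```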

I have the problem that there is no console.log output either.
All other functions I tried are working, but your example is not:

// Start the plugin in Web Audio mode
audioinput.start({
    streamToWebAudio: true
});

// Get the Web Audio Context from the plugin
var audioContext = audioinput.getAudioContext();

// Create and init a Web Audio Analyser node
analyser = audioContext.createAnalyser();
analyser.smoothingTimeConstant = 0.3;
analyser.fftSize = 1024;

processorNode = audioContext.createScriptProcessor(2048, 1, 1);

processorNode.onaudioprocess = function() {
    var array = new Uint8Array(analyser.frequencyBinCount);
    analyser.getByteFrequencyData(array);
    var averageVolume = getAverageVolume(array);
    console.log('Average volume in this chunk was: ' + averageVolume);
};

function getAverageVolume(array) {
    var amplitudeSum = 0;

    for (var i = 0; i < array.length; i++) {
        amplitudeSum += array[i];
    }

    return amplitudeSum / array.length;
}

// Connect the audioinput to the analyser node
audioinput.connect( processorNode );

Is there any solution?

Working code is:

// Global variables for audio
var audioContext;
var analyser;
var processorNode;
var array; // array to hold frequency data

// Start the plugin in Web Audio mode
audioinput.start({
    streamToWebAudio: true
});

// Get the Web Audio Context from the plugin
audioContext = audioinput.getAudioContext();

// Create and init a Web Audio Analyser node
analyser = audioContext.createAnalyser();
analyser.smoothingTimeConstant = 0;
analyser.fftSize = 1024;

processorNode = audioContext.createScriptProcessor(2048, 1, 1);
array = new Uint8Array(analyser.frequencyBinCount);

// Connect the audioinput to the analyser node
audioinput.connect(analyser);
analyser.connect(processorNode);
processorNode.connect(audioContext.destination);

processorNode.onaudioprocess = function() {
    analyser.getByteFrequencyData(array);
    var averageVolume = getAverageVolume(array);
    console.log('Average volume in this chunk was: ' + averageVolume);
};

function getAverageVolume(array) {
    var amplitudeSum = 0;

    for (var i = 0; i < array.length; i++) {
        amplitudeSum += array[i];
    }

    return amplitudeSum / array.length;
}

Hi desmeit, does your example process volume levels as they happen, or only after you press stop?