edimuj / cordova-plugin-audioinput

This iOS/Android Cordova/PhoneGap plugin enables audio capture from the device microphone by forwarding audio to the web layer of your application in near real-time. A typical usage scenario for this plugin is to use the captured audio as a source for a Web Audio node chain, where it can then be analyzed, manipulated and/or played.
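For example, a minimal sketch of that Web Audio scenario (my own, not from the plugin docs; it assumes deviceready has fired and microphone permission is granted), using the plugin's streamToWebAudio mode to feed an analyser node:

declare let audioinput: any;

function startMicAnalysis(): void {
  // Let the plugin create and manage its own AudioContext
  audioinput.start({ streamToWebAudio: true });
  const ctx = audioinput.getAudioContext();
  const analyser = ctx.createAnalyser();
  // Insert the microphone stream into the node chain
  audioinput.connect(analyser);
  // Sample the spectrum; poll this e.g. from requestAnimationFrame
  const bins = new Uint8Array(analyser.frequencyBinCount);
  analyser.getByteFrequencyData(bins);
}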

Home Page: https://github.com/edimuj/app-audioinput-demo

Wav file not generating correctly

akadebnath opened this issue

Looks like this plugin no longer works.
I tried integrating the code in my app, and also downloaded the sample app and generated an APK file from it.
It always creates a 1 KB .wav file which does not play.
It seems audioDataBuffer is created correctly, but WavAudioEncoder cannot convert it to a proper audio file.

I am having the same issue as you. If you can fix it, please let me know.

I gave up and switched to cordova-plugin-media

I have just recently integrated the plugin in my Ionic project with Ionic 5 and Capacitor.
It works nicely, except for a little crackling on iOS, which might be due to misconfiguration on my part.
I can make wav files.

I have fixed the crackling, if anybody needs help, write me :)

Dear @TobiasFPJS, can you please share your code that works on iOS? I am able to generate a wav file on Android but have had no success on iOS.

FYI, it is not pretty, but here it is:
import { BaseAudioManager, AudioManager } from './BaseAudiomanager';

declare let audioinput: any;
declare let WavAudioEncoder: any;

export class AudioManagerIOS extends BaseAudioManager implements AudioManager {
  totalReceivedData: number = 0;
  iosCaptureCfg: any;
  onInput: any;
  onInputError: any;
  audioManagerId: string = "";
  removed: boolean = false;
  eventListenersAdded: boolean = false;
  private recording: boolean = false;

  constructor(lights: any, window: any) {
    super(lights, window);
    this.audioManagerId = Math.random().toString(36).substring(7);
    this.iosCaptureCfg = {
      sampleRate: 48000,
      bufferSize: 2048,
      channels: audioinput.CHANNELS.MONO,
      format: audioinput.FORMAT.PCM_16BIT,
      streamToWebAudio: false,
      audioSourceType: audioinput.AUDIOSOURCE_TYPE.DEFAULT
    };
  }

  record(recordToBlob: boolean = true) {
    this.hasRecorded = false;
    this.initIOSAudio(recordToBlob);
    if (recordToBlob) {
      this.recording = true;
    }
    audioinput.checkMicrophonePermission((hasPermission) => {
      console.log(hasPermission);
      if (hasPermission) {
        this.startCaptureIOS();
      } else {
        audioinput.getMicrophonePermission((hasPermission) => {
          if (hasPermission) {
            this.startCaptureIOS();
          } else {
            alert('Without consent to use the microphone, this app will not function.');
          }
        });
      }
    });
  }

  initIOSAudio(recordToBlob: boolean = true): void {
    const audioCtx = this.newAudioCtx();
    const gainNode = audioCtx.createGain();
    gainNode.connect(audioCtx.destination);

    // 100 ms scratch buffer, refilled with each incoming chunk
    const iosAudioBuffer = audioCtx.createBuffer(this.iosCaptureCfg.channels, (this.iosCaptureCfg.sampleRate * 0.1), this.iosCaptureCfg.sampleRate);

    const analyserNode = new AnalyserNode(audioCtx, this.analyserOptions);
    const self = this;
    this.onInput = (evt: any) => {
      if (self.removed) {
        return;
      }
      try {
        if (evt && evt.data) {
          // Push the data to the audio queue (array), for saving later
          if (recordToBlob) {
            self.totalReceivedData += evt.data.length;
            if (!self.chunks) {
              self.chunks = evt.data;
            } else {
              self.chunks = self.chunks.concat(evt.data);
            }
          }
          // Push data to the audio data buffer for immediate analysis
          iosAudioBuffer.getChannelData(0).set(evt.data);

          const source = audioCtx.createBufferSource();
          source.buffer = iosAudioBuffer;
          source.connect(analyserNode);
          source.start();
          source.onended = () => {
            source.disconnect();
          };
          const analyserAudio = new Uint8Array(analyserNode.frequencyBinCount);
          analyserNode.getByteFrequencyData(analyserAudio);
          self.audiovisualizer.updateLights(analyserAudio, audioCtx.sampleRate / analyserNode.fftSize);
        }
      } catch (ex) {
        console.log(ex);
        alert("Error - Microphone already in use.");
      }
    };

    this.onInputError = (error) => {
      console.log(error);
      alert("Error - Microphone already in use elsewhere.");
    };
    this.addEventlisteners();
    audioCtx.onstatechange = () => {
      if (audioCtx?.state === "closed") {
        gainNode.disconnect();
        analyserNode.disconnect();
      }
    };
  }

  private addEventlisteners() {
    // Make sure to unsubscribe from existing listeners when initializing new iosAudio
    if (!this.eventListenersAdded) {
      this.eventListenersAdded = true;
    } else {
      this.removeEventListeners();
    }
    window.addEventListener('audioinput', this.onInput, false);
    window.addEventListener('audioinputerror', this.onInputError, false);
  }

  private removeEventListeners() {
    window.removeEventListener('audioinput', this.onInput, false);
    window.removeEventListener('audioinputerror', this.onInputError, false);
  }

  private async startCaptureIOS() {
    this.chunks = [];
    if (!audioinput.isCapturing()) {
      audioinput.start(this.iosCaptureCfg);
    }
  }

  isRunning() {
    return audioinput.isCapturing();
  }

  isRecording(): boolean {
    return this.recording;
  }

  stop() {
    if (this.isRecording()) {
      if (audioinput.isCapturing()) {
        audioinput.stop();
      }
      this.removeEventListeners();
      try {
        const encoder = new WavAudioEncoder(this.iosCaptureCfg.sampleRate, this.iosCaptureCfg.channels);
        encoder.encode([this.chunks]);
        this.blob = encoder.finish("audio/wav");
        this.totalReceivedData = 0;
      } catch (e) {
        alert("stopCapture exception: " + e);
      }
      this.recording = false;
      this.hasRecorded = true;
    } else if (this.isRunning()) { // call the method; the bare reference is always truthy
      if (audioinput.isCapturing()) {
        audioinput.stop();
      }
    }
  }

  reset() {
    this.hasRecorded = false;
    this.blob = null;
    this.chunks = [];
  }

  remove() {
    this.removeEventListeners();
    this.removed = true;
  }
}

And I have added this to my index.html to have it encode to wav:
<script type="text/javascript" src="assets/WavAudioEncoder.min.js"></script>
And below is that script.
(function (self) {
  var min = Math.min,
    max = Math.max;

  var setString = function (view, offset, str) {
    var len = str.length;
    for (var i = 0; i < len; ++i)
      view.setUint8(offset + i, str.charCodeAt(i));
  };

  var Encoder = function (sampleRate, numChannels) {
    this.sampleRate = sampleRate;
    this.numChannels = numChannels;
    this.numSamples = 0;
    this.dataViews = [];
  };

  // Convert float samples in [-1, 1] to interleaved 16-bit PCM
  Encoder.prototype.encode = function (buffer) {
    var len = buffer[0].length,
      nCh = this.numChannels,
      view = new DataView(new ArrayBuffer(len * nCh * 2)),
      offset = 0;
    for (var i = 0; i < len; ++i)
      for (var ch = 0; ch < nCh; ++ch) {
        var x = buffer[ch][i] * 0x7fff;
        view.setInt16(offset, x < 0 ? max(x, -0x8000) : min(x, 0x7fff), true);
        offset += 2;
      }
    this.dataViews.push(view);
    this.numSamples += len;
  };

  // Prepend the 44-byte RIFF/WAVE header and return the finished blob
  Encoder.prototype.finish = function (mimeType) {
    var dataSize = this.numChannels * this.numSamples * 2,
      view = new DataView(new ArrayBuffer(44));
    setString(view, 0, 'RIFF');
    view.setUint32(4, 36 + dataSize, true);
    setString(view, 8, 'WAVE');
    setString(view, 12, 'fmt ');
    view.setUint32(16, 16, true);
    view.setUint16(20, 1, true); // audio format: PCM
    view.setUint16(22, this.numChannels, true);
    view.setUint32(24, this.sampleRate, true);
    view.setUint32(28, this.sampleRate * this.numChannels * 2, true); // byte rate = sampleRate * channels * 2 bytes
    view.setUint16(32, this.numChannels * 2, true); // block align
    view.setUint16(34, 16, true); // bits per sample
    setString(view, 36, 'data');
    view.setUint32(40, dataSize, true);
    this.dataViews.unshift(view);
    const blob = new Blob(this.dataViews, { type: mimeType || 'audio/wav' });
    this.cleanup();
    return blob;
  };

  Encoder.prototype.cancel = Encoder.prototype.cleanup = function () {
    delete this.dataViews;
  };

  self.WavAudioEncoder = Encoder;
})(self);
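If you want to verify the encoder in isolation, here is a quick sanity check of my own (not from the thread): encode one second of a 440 Hz sine and confirm the blob is the 44-byte header plus 2 bytes per sample.

declare let WavAudioEncoder: any;

const sampleRate = 48000;
const tone = new Float32Array(sampleRate); // 1 second, mono
for (let i = 0; i < tone.length; i++) {
  tone[i] = Math.sin(2 * Math.PI * 440 * i / sampleRate);
}
const enc = new WavAudioEncoder(sampleRate, 1);
enc.encode([tone]);                              // one Float32Array per channel
const wav = enc.finish("audio/wav");
console.log(wav.size === 44 + tone.length * 2);  // true for a well-formed file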

And one last note: my package.json contains this, and it is very important that you use 1.0.1 as well:
"cordova-plugin-audioinput": "1.0.1",

Thanks a lot @TobiasFPJS

Hello, I was trying the code on Android. It said the wav file was successfully generated, but the file is always 1 KB and cannot be played back.
@TobiasFPJS @hasib Can you give some tips on how to make it work? Thank you very much.

The solution to the problem is to merge the array of audioDataBuffer chunks into a single Float32Array, which is what WavAudioEncoder.min.js expects.

var data = flattenArray(audioDataBuffer);
encoder.encode([data]);

....

function flattenArray(chunks) {
  const frames = chunks.reduce((acc, elem) => acc + elem.length, 0);
  const result = new Float32Array(frames);
  
  let currentFrame = 0;
  chunks.forEach((chunk) => {
    result.set(chunk, currentFrame);
    currentFrame += chunk.length;
  });
  
  return result;
}
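
For completeness, a sketch of how this fix slots into a stop routine (my own wiring, not the exact demo code; it assumes audioDataBuffer holds the per-event sample chunks collected from 'audioinput' events, and uses flattenArray from above):

declare let audioinput: any;
declare let WavAudioEncoder: any;

function stopAndEncode(audioDataBuffer: Float32Array[], captureCfg: any): Blob {
  if (audioinput.isCapturing()) {
    audioinput.stop();
  }
  // Merge the chunk array into the single Float32Array the encoder expects
  const data = flattenArray(audioDataBuffer);
  const encoder = new WavAudioEncoder(captureCfg.sampleRate, captureCfg.channels);
  encoder.encode([data]);              // mono: one buffer per channel
  return encoder.finish("audio/wav");  // playable blob, e.g. via URL.createObjectURL
}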