juanmc2005 / diart

A python package to build AI-powered real-time audio applications

Home Page:https://diart.readthedocs.io

Implement voicefixer for audio enhancement

thieugiactu opened this issue · comments

Is there any way to implement voicefixer to speaker diarization pipeline?
The package takes a WAV file as input and outputs an upsampled 44.1 kHz WAV file, but that could easily be modified to take and return audio numpy arrays.
Since the speaker embeddings depend greatly on the quality of the input audio, and in real-world environments there are many factors that can affect audio quality (the quality of the recording device, the speaker's voice changing over time, etc.), I think having some audio quality enhancement is a must.

Hi @thieugiactu, that's an interesting idea.

To do this in a streaming way we would need access to a pre-trained model for the enhancement task, then implement a SpeechEnhancementModel and a SpeechEnhancement block. This would allow you to build a pipeline that calls SpeechEnhancement on the audio before sending it to SpeakerSegmentation and SpeakerEmbedding.
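To make the idea concrete, here's a minimal standalone sketch of what such a block could look like. Note that `SpeechEnhancement` and `dummy_enhancer` are illustrative names, not part of the diart API, and the placeholder "model" just peak-normalizes the chunk where a real pipeline would call voicefixer:

```python
import numpy as np

class SpeechEnhancement:
    """Hypothetical pipeline block (sketch, not the actual diart API):
    wraps an enhancement model so it can be applied to each waveform
    chunk before segmentation and embedding."""

    def __init__(self, model, sample_rate: int = 16000):
        self.model = model  # any callable: np.ndarray -> np.ndarray
        self.sample_rate = sample_rate

    def __call__(self, waveform: np.ndarray) -> np.ndarray:
        # waveform: mono chunk of shape (num_samples,)
        enhanced = self.model(waveform)
        # Enhancement must not change the chunk length, or downstream
        # blocks would receive misaligned audio
        assert enhanced.shape == waveform.shape
        return enhanced

def dummy_enhancer(wav: np.ndarray) -> np.ndarray:
    """Placeholder standing in for a real model like voicefixer:
    simple peak normalization."""
    peak = np.max(np.abs(wav))
    return wav / peak if peak > 0 else wav

enhance = SpeechEnhancement(dummy_enhancer)
chunk = np.random.randn(16000).astype(np.float32)  # 1 s at 16 kHz
out = enhance(chunk)
```

The key constraint is that the block consumes and produces chunks of the same shape and sample rate, so it can be dropped in front of the existing segmentation/embedding blocks without changing anything downstream.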

In order to make this compatible with SpeakerDiarization (or any pipeline, for that matter), we could implement a method like add_audio_preprocessors() to prepend arbitrary audio transformations (e.g. enhancement, resampling, volume change, etc.).
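Prepending preprocessors essentially amounts to composing waveform transforms before the pipeline runs. A rough sketch of that composition (all names here are illustrative, not diart API):

```python
import numpy as np

def compose(*fns):
    """Chain audio transforms left to right: compose(f, g)(x) == g(f(x))."""
    def pipeline(x):
        for fn in fns:
            x = fn(x)
        return x
    return pipeline

# Hypothetical preprocessors; a real one could be an enhancement model
def change_volume(gain: float):
    return lambda wav: wav * gain

def clip(wav: np.ndarray) -> np.ndarray:
    # Keep samples in the valid [-1, 1] float audio range
    return np.clip(wav, -1.0, 1.0)

preprocess = compose(change_volume(2.0), clip)
wav = np.array([0.2, -0.6, 0.4], dtype=np.float32)
out = preprocess(wav)  # gain applied first, then clipped to [-1, 1]
```

An add_audio_preprocessors() method would essentially store such a composed callable and apply it to each incoming chunk before the first pipeline block.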

I will give it a try. If I have any questions regarding diart, can I directly ask them under this issue?

@thieugiactu sure! Feel free to open a PR too, I'd be glad to discuss possible solutions to this

This is what I've been doing so far. I re-used your code but replaced the Whisper model with a wav2vec2 model for speech recognition, since my PC couldn't handle Whisper.
The code worked, but there are some adjustments that could be made:

  • The process takes a really long time, since the voicefixer model also has to process silent chunks with no speaker, as well as chunks in the same batch that have little to no difference between them.
  • The silence parts at the start and the end of the speech should be trimmed.
    project.zip.
    voicefixer probably needs librosa==0.9.2 to run.
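For the second point, trimming leading and trailing silence can be done with a simple amplitude gate before handing the chunk to voicefixer. A minimal sketch (the threshold value is an assumption to tune; `librosa.effects.trim` offers a dB-based alternative):

```python
import numpy as np

def trim_silence(wav: np.ndarray, threshold: float = 0.01) -> np.ndarray:
    """Drop leading/trailing samples whose absolute amplitude is
    below `threshold`. Interior silence is kept untouched."""
    voiced = np.flatnonzero(np.abs(wav) >= threshold)
    if voiced.size == 0:
        return wav[:0]  # the whole chunk is silence
    return wav[voiced[0]:voiced[-1] + 1]

wav = np.array([0.0, 0.001, 0.5, -0.3, 0.002, 0.0], dtype=np.float32)
trimmed = trim_silence(wav)  # keeps only the voiced middle section
```

Gating whole chunks this way would also help with the first point: fully silent chunks come back empty and can be skipped instead of being sent through the enhancement model.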

@thieugiactu something you could also do to reduce the inference time is to directly record audio at 44.1 kHz. This way you avoid having to upsample in the first place.
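To illustrate the cost being avoided: this is roughly the resampling step that recording at 44.1 kHz skips (sketched with `scipy.signal.resample_poly`; voicefixer's internal resampler may differ):

```python
from math import gcd

import numpy as np
from scipy.signal import resample_poly

TARGET_RATE = 44_100  # voicefixer's output rate

def to_target_rate(wav: np.ndarray, source_rate: int) -> np.ndarray:
    """Upsample `wav` to 44.1 kHz; a no-op when already at the target rate."""
    if source_rate == TARGET_RATE:
        return wav  # recording at 44.1 kHz skips the resampling entirely
    # e.g. 16 kHz -> 44.1 kHz reduces to an up/down ratio of 441/160
    g = gcd(TARGET_RATE, source_rate)
    return resample_poly(wav, TARGET_RATE // g, source_rate // g)

wav_16k = np.random.randn(16_000).astype(np.float32)  # 1 s at 16 kHz
wav_44k = to_target_rate(wav_16k, 16_000)             # 1 s at 44.1 kHz
```

Polyphase resampling over every chunk adds up quickly in a streaming setting, so capturing at the target rate is a cheap win when the microphone supports it.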

@juanmc2005 thank you for your reply. Unfortunately voicefixer is so unstable that I couldn't make it work properly. More often than not it would degrade the audio quality even further.