facebookresearch / AudioDec

An Open-source Streaming High-fidelity Neural Audio Codec


Relation / comparison to encodec

vadimkantorov opened this issue · comments

Hi! Thanks for the open-source release, along with the training code!

I noticed that the AudioDec paper does not cite High Fidelity Neural Audio Compression (https://github.com/facebookresearch/encodec). I wonder if any comparisons with pretrained encodec were conducted. Or please correct me if I missed something.

Thank you!

Hi!
AudioDec (ICASSP 2023 deadline: 22/10/26) and Encodec (published on arXiv on 22/10/24) were developed by different teams at Meta at almost the same time, so we haven't compared AudioDec with Encodec. However, since the AudioDec project focuses on human sounds while Encodec focuses on general audio, there are several main differences.

  1. AudioDec adopts only a simple single-resolution mel loss, since that alone already models speech well, while Encodec adopts multi-resolution mel losses plus waveform-based losses.
  2. AudioDec has a two-stage architecture (encoder + vocoder), which makes it easy to develop new encoders/decoders for different applications such as denoising or binaural rendering.
  3. The autoencoder architecture of AudioDec is almost the same as SoundStream's, while Encodec's architecture is a combination of SoundStream and Demucs.
  4. The currently provided AudioDec pre-trained models were trained on speech-only corpora (VCTK or LibriTTS), while the Encodec models were trained on many different kinds of audio. Since a training/testing data mismatch usually causes significant performance degradation for data-driven models, we recommend using the AudioDec pre-trained models only for speech and training new AudioDec models for other types of audio signals.
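To make point 1 concrete, here is a minimal NumPy sketch of what a single-resolution log-mel loss looks like: one STFT at a fixed window/hop, a mel filterbank projection, and an L1 distance between log-mel spectrograms. This is an illustrative sketch, not AudioDec's actual implementation (which is in PyTorch with its own parameters); the sample rate, FFT size, hop, and mel-band count below are placeholder values.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    # Triangular mel filters covering 0 .. sr/2.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):          # rising slope
            if center > left:
                fb[i - 1, k] = (k - left) / (center - left)
        for k in range(center, right):         # falling slope
            if right > center:
                fb[i - 1, k] = (right - k) / (right - center)
    return fb

def log_mel(x, sr=24000, n_fft=1024, hop=256, n_mels=80):
    # Magnitude STFT with a Hann window at a single resolution.
    win = np.hanning(n_fft)
    frames = [np.abs(np.fft.rfft(x[s:s + n_fft] * win))
              for s in range(0, len(x) - n_fft + 1, hop)]
    spec = np.stack(frames, axis=1)            # (n_fft//2 + 1, T)
    mel = mel_filterbank(sr, n_fft, n_mels) @ spec
    return np.log(mel + 1e-5)

def mel_loss(ref, est, **stft_kwargs):
    # L1 distance between log-mel spectrograms; a multi-resolution
    # variant (as in Encodec) would average this over several
    # (n_fft, hop) configurations instead of using just one.
    return float(np.mean(np.abs(log_mel(ref, **stft_kwargs)
                                - log_mel(est, **stft_kwargs))))
```

A multi-resolution version simply sums `mel_loss` over several FFT/hop settings, which is the main extra cost Encodec pays relative to the single-resolution loss that suffices for speech.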

Interesting. Thank you!

Also, the reason I asked is that projects like NaturalSpeech 2 use a neural codec as an important part of their TTS pipeline, so it seems that AudioDec might be better suited for replicating such a model!

Yes, since the training script is provided and the training is not very computationally expensive, it should be easy to train a new AudioDec on a new dataset for a new downstream task.

Maybe the information in this issue would be a great addition to the README. I assume many people will have similar questions (about similarities/differences with Encodec/SoundStream/Lyra v2).

Thanks for the suggestion!
I will add this question to the README.