maum-ai / voicefilter

Unofficial PyTorch implementation of Google AI's VoiceFilter system

Home Page: http://swpark.me/voicefilter


Question about starting point of SDR

lycox1 opened this issue · comments

Dear @seungwonpark

First of all, thank you for this great open-source project.
I wanted to try out your code, so I trained VoiceFilter myself.

However, I am having a problem with SDR. In the SDR graph on the voicefilter GitHub page, the SDR value goes from 2 to 10 dB, but in my case it only ranges from -0.8 to 1.2.
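For context, I believe the repo measures SDR with a BSS-eval style metric (via mir_eval's `bss_eval_sources`). As a rough intuition only, a simplified SDR (target energy over error energy, without the projection step that real BSS-eval performs) can be sketched like this:

```python
import math

def simple_sdr(reference, estimate):
    """Plain signal-to-distortion ratio in dB: target energy over error energy.

    Simplified sketch; real BSS-eval SDR also projects out allowed
    distortions (gain, filtering) before measuring the error.
    """
    noise_energy = sum((r - e) ** 2 for r, e in zip(reference, estimate))
    signal_energy = sum(r ** 2 for r in reference)
    return 10.0 * math.log10(signal_energy / noise_energy)

# toy check: an estimate at half the reference amplitude gives ~6 dB
ref = [1.0, 0.0, -1.0, 0.0]
est = [0.5 * r for r in ref]
print(round(simple_sdr(ref, est), 2))  # 6.02
```

A negative SDR, as in the graph above, means the error energy exceeds the target energy, i.e. the model output is further from the clean target than silence scaled appropriately would be.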

[image: SDR curve from my training run]

I am trying to find the cause of the problem but cannot find it.

Can you help me track it down?

I used the default yaml and generator.py (train-clean-100, train-clean-360, and dev-clean were used for training).

Could you let me know what I should check?

Thank you!

Hi, @lycox1
Thanks for your interest in the VoiceFilter open-source repo.

As discussed in #5, the SDR may differ significantly from the results in the README, since it's measured on a random sample. Please refer to Jungwon Seo's comment here: #5 (comment)

Thanks, @seungwonpark
I have already read #5.

I think the key points of #5 are:

  1. train-other-500 is not used for training; only train-clean-100 and train-clean-360 are.
    --> I used train-clean-100, train-clean-360, and dev-clean.
  2. Compare against the published samples (the original paper's samples: https://google.github.io/speaker-id/publications/VoiceFilter/).
    --> I checked dev_tuples.csv and train_tuples.csv (https://github.com/google/speaker-id/tree/master/publications/VoiceFilter/dataset/LibriSpeech). Files from dev-clean do appear in dev_tuples.csv, but files from train-clean-100 and train-clean-360 appear in neither dev_tuples.csv nor train_tuples.csv.
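That kind of membership check can be automated by scanning the CSV for the utterance IDs (a sketch; the example rows below are hypothetical, not the real contents of dev_tuples.csv):

```python
import csv
import io

def id_in_tuples(csv_text, utt_id):
    """Return True if utt_id appears in any field of a tuples CSV."""
    reader = csv.reader(io.StringIO(csv_text))
    return any(utt_id in field for row in reader for field in row)

# hypothetical example rows, not the real dev_tuples.csv contents
sample = "84-121123-0001,84-121123-0002\n174-50561-0003,174-50561-0004\n"
print(id_in_tuples(sample, "84-121123-0001"))    # True
print(id_in_tuples(sample, "1001-134707-0000"))  # False
```

For the real files one would read them with `open(...)` and pass the text in, or adapt the reader to iterate over the file object directly.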

Please let me know if you have any other clues!

Thanks.

Hello @seungwonpark, I am running into a similar problem to @lycox1's. Could you please give me a hand?
I followed almost all the README steps, except that the audio files in LibriSpeech have the .flac suffix, so I changed line 24 of normalize-resample.sh from
`for f in $(find . -name "*.wav"); do`
to
`for f in $(find . -name "*.flac"); do`.
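If the shell quoting gets fiddly, the same file discovery can be done in Python before handing paths to the resampler (a sketch; the directory layout is an assumption):

```python
from pathlib import Path

def list_flac(root):
    """Recursively collect all .flac files under root, sorted for determinism."""
    return sorted(str(p) for p in Path(root).rglob("*.flac"))
```

Each returned path could then be passed to ffmpeg or sox, mirroring what normalize-resample.sh does per file.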

Since I cloned the newest code, train-other-500 has already been removed. By the way, I noticed that in the README the number of test cases is 1000, while the code uses only 100.
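That difference matters for how stable the reported SDR is: the standard error of a mean over n samples shrinks as 1/sqrt(n), so an average over 100 test cases fluctuates roughly 3x more between runs than one over 1000. A quick illustration (the 3 dB per-sample spread is a made-up figure, not measured from this repo):

```python
def sem(per_sample_std, n):
    """Standard error of the mean over n independent samples."""
    return per_sample_std / n ** 0.5

# assume a per-sample SDR standard deviation of 3 dB (illustrative only)
print(round(sem(3.0, 100), 3))   # 0.3
print(round(sem(3.0, 1000), 3))  # 0.095
```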

Here are the images of the training loss, test loss, and test SDR from my experiment. Although the test data may differ, I believe a correct training loss curve should still look similar, right?
[screenshots: training loss, test loss, and test SDR curves]

Hi, @lawlict

The test loss curve may fluctuate since we didn't perform the evaluation on a sufficient amount of data, so the curve may look a bit different.