hechmik / voxceleb_enrichment_age_gender

Code and data repository for the paper "VoxCeleb enrichment for Age and Gender recognition", submitted to ASRU 2021.


Trying ivec_log_reg_model.torch

Mlallena opened this issue

I am trying to use the gender recognition model shown here ('ivec_log_reg_model.torch'), but the suggested method fails with the following error:

Traceback (most recent call last):
  File "test.py", line 4, in <module>
    model.load_state_dict(torch.load('../best_models/ivec_log_reg_model.torch'))
  File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1406, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for LogisticRegression:
        size mismatch for linear.weight: copying a param with shape torch.Size([2, 400]) from checkpoint, the shape in current model is torch.Size([1, 512]).
        size mismatch for linear.bias: copying a param with shape torch.Size([2]) from checkpoint, the shape in current model is torch.Size([1]).

Replacing (512, 1) with (400, 2) in the example does seem to work. The remaining problem is that there is no mention of how to test the model on your own audio files. I'll see if I can figure it out, but any suggestions would be welcome.
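For reference, this minimal definition loads the checkpoint without the mismatch. It assumes the repo's LogisticRegression wraps a single nn.Linear layer, which the "linear.weight"/"linear.bias" names in the error suggest; the class in the repository may differ in detail.

import torch
import torch.nn as nn

class LogisticRegression(nn.Module):
    # Sketch based on the checkpoint's parameter names; not necessarily
    # identical to the class defined in the repository.
    def __init__(self, input_dim, num_classes):
        super().__init__()
        self.linear = nn.Linear(input_dim, num_classes)

    def forward(self, x):
        return self.linear(x)

# The checkpoint stores a (2, 400) weight: 400-dim i-vectors, two classes.
model = LogisticRegression(400, 2)
model.load_state_dict(torch.load('../best_models/ivec_log_reg_model.torch'))
model.eval()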

Hi @Mlallena,

It makes sense that the model throws that error, as it expects an i-vector with the dimensions you mentioned: a 400-dimensional input and two output classes. More info on how we computed the i-vectors can be found in section 2 of this README file: https://github.com/hechmik/voxceleb_enrichment_age_gender/blob/main/notebooks/README.md.

To test and/or fine-tune your model on my own audio files, what would I have to do with them? Do I need to compute their MFCC features first? Or is there a method that takes an audio file path directly and derives the model input internally?

Thanks for your previous answer. As I said earlier, I'll keep looking on my own, but an answer is welcome.

Sorry for the delayed response, but I was at work and didn't have time to get back to you until now. The procedure you need to follow is:

  • Compute MFCCs for your recordings using Kaldi
  • Compute the i-vectors for your recordings using the "ivector-extractor"
  • Pass these i-vectors to the pre-trained model you already tried (a rough sketch of the whole pipeline follows this list)
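To make this concrete, here is a rough Python sketch of that pipeline. It uses torchaudio's Kaldi-compatible MFCC frontend for illustration rather than Kaldi itself, the MFCC parameters are assumptions rather than our exact configuration, and extract_ivector is a hypothetical stand-in for the asvtorch/Kaldi i-vector step.

import torch
import torchaudio

# 1. Kaldi-style MFCCs via torchaudio's Kaldi-compatible frontend.
#    num_ceps=13 is an assumption; check the asvtorch config for the
#    settings we actually used.
waveform, sample_rate = torchaudio.load('my_recording.wav')
mfccs = torchaudio.compliance.kaldi.mfcc(waveform, num_ceps=13,
                                         sample_frequency=sample_rate)

# 2. I-vector extraction needs the UBM and total-variability matrix that
#    asvtorch trains on VoxCeleb; extract_ivector is a hypothetical
#    placeholder for that step, returning a 400-dim tensor.
ivector = extract_ivector(mfccs)  # torch.Size([400])

# 3. Score with the pre-trained logistic regression loaded earlier.
with torch.no_grad():
    logits = model(ivector.unsqueeze(0))
    gender = logits.argmax(dim=1).item()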

As mentioned in the README.md file, we used the asvtorch tool for all these steps, as it was the easiest option for processing VoxCeleb recordings. In your scenario you'll need to modify this library a little; however, I haven't had the chance to do that myself, as we always worked inside the VoxCeleb ecosystem.

A good starting point is the description of the actual steps needed for computing i-vectors, which you can find here. The solution proposed in our paper is VoxCeleb-dependent, as we used the unlabelled recordings for training the various extractors: in my opinion you could replicate the same steps on other datasets, even though the results will likely differ.

I hope I was clear enough!