Error when loading the gru embedder
jaysonph opened this issue · comments
Jayson Ng commented
import torch
model = torch.hub.load('RF5/simple-speaker-embedding', 'gru_embedder')
model.eval()
When I run the lines above, I get the following error:
Using cache found in /root/.cache/torch/hub/RF5_simple-speaker-embedding_master
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-3-f8b105db465a> in <cell line: 4>()
2
3 import torch
----> 4 model = torch.hub.load('RF5/simple-speaker-embedding', 'gru_embedder')
5 model.eval()
6 frames
~/.cache/torch/hub/RF5_simple-speaker-embedding_master/stft.py in __init__(self, filter_length, hop_length, win_length, window)
33 # get window and zero center pad it to filter_length
34 fft_window = get_window(window, win_length, fftbins=True)
---> 35 fft_window = pad_center(fft_window, filter_length)
36 fft_window = torch.from_numpy(fft_window).float()
37
TypeError: pad_center() takes 1 positional argument but 2 were given
How could I resolve this bug?
Jayson Ng commented
Solved it by
pip install librosa==0.9.2
I think the model's dependencies should be frozen in a requirements.txt.
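For context, the error appears to stem from librosa 0.10 making the `size` argument of `librosa.util.pad_center` keyword-only, so the positional call `pad_center(fft_window, filter_length)` in stft.py no longer works; pinning librosa 0.9.2 restores the old signature. The sketch below mimics that signature change with a hypothetical stand-in (it is not librosa's actual implementation) to show why the positional call raises the same TypeError:

```python
import numpy as np

def pad_center(data, *, size, axis=-1):
    # Hypothetical stand-in mirroring librosa>=0.10, where `size`
    # became keyword-only; not librosa's real implementation.
    n = data.shape[axis]
    lpad = (size - n) // 2
    widths = [(0, 0)] * data.ndim
    widths[axis] = (lpad, size - n - lpad)
    return np.pad(data, widths)

fft_window = np.hanning(800)               # stand-in for get_window(...)
padded = pad_center(fft_window, size=1024) # keyword form: works
print(padded.shape)                        # (1024,)

try:
    pad_center(fft_window, 1024)           # old positional form
except TypeError as e:
    print(type(e).__name__)                # TypeError, as in the traceback
```

An alternative to pinning would be changing line 35 of stft.py to `pad_center(fft_window, size=filter_length)`, which should work on newer librosa versions as well.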
Matthew Baas commented
Will do, thanks for the info.
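A requirements.txt along the lines suggested above might look like the following; the librosa pin comes from this thread, while the other entries (and their presence in the repo) are assumptions about what stft.py and the hub entry point import:

```text
# librosa pinned per this issue; other pins are illustrative assumptions
librosa==0.9.2
torch
numpy
scipy
```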