gentaiscool / end2end-asr-pytorch

End-to-End Automatic Speech Recognition on PyTorch


CUDA out of memory when validation

ArtemisZGL opened this issue · comments

First, I want to thank you for your work. I ran your code with the LibriSpeech train-clean-100 set as training data and dev-clean as validation data on a GeForce RTX 2070. After several OOM errors, I set the batch size to 4 and could finally train normally. But after one epoch I also hit an OOM error during validation. Can I avoid this by making the batch size even smaller? I noticed that in the LibriSpeech dataset processing script the training data is pruned to a min/max duration, but the validation and test data are not.
I would also like to know whether there is a LibriSpeech result for this code; I only saw the AISHELL result in the README.
Thanks.
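One way to address the asymmetry mentioned above would be to apply the same min/max duration pruning to the dev/test manifests that the script applies to the training data. The sketch below is illustrative only; the field names, thresholds, and manifest format are assumptions, not the repo's actual code.

```python
# Hypothetical sketch: prune dev/test utterances by duration, mirroring
# the min/max pruning the training script applies. Very long utterances
# are a common cause of validation-time OOM on small GPUs.
MIN_DUR, MAX_DUR = 1.0, 16.0  # seconds; assumed limits, not the repo's

def prune_by_duration(manifest, min_dur=MIN_DUR, max_dur=MAX_DUR):
    """Keep only utterances whose duration lies in [min_dur, max_dur]."""
    return [u for u in manifest if min_dur <= u["duration"] <= max_dur]

dev_manifest = [
    {"path": "a.flac", "duration": 0.5},   # too short, dropped
    {"path": "b.flac", "duration": 7.2},   # kept
    {"path": "c.flac", "duration": 30.0},  # too long, dropped
]
pruned = prune_by_duration(dev_manifest)
print(pruned)  # only b.flac survives
```

Pruning the validation set changes what is being measured, so reporting which utterances were dropped (or evaluating the full set on CPU) would keep results comparable.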

I added `with torch.no_grad():` before the validation loop:

for ind in range(len(valid_loader_list)):

following this discussion: https://discuss.pytorch.org/t/cuda-error-out-of-memory/28123. I think it reduced the problem for me, though I am not certain. The test code already wraps inference in `with torch.no_grad():`, so it seems worth trying.

@paanguin thanks! I will try this.

Thanks @paanguin, I am closing the issue.