b04901014 / FT-w2v2-ser

Official implementation for the paper Exploring Wav2vec 2.0 fine-tuning for improved speech emotion recognition

run_downstream_custom_multiple_fold.py CUDA out of memory

zxpan opened this issue · comments

I got the following error when running run_downstream_custom_multiple_fold.py:
RuntimeError: CUDA out of memory. Tried to allocate 730.00 MiB (GPU 0; 23.70 GiB total capacity; 21.65 GiB already allocated; 426.81 MiB free; 21.81 GiB reserved in total by PyTorch)

I have an NVIDIA GeForce RTX 3090 with 24 GB of memory.

Any insights on how to work around it?

Me too... I think we have to use multiple GPUs.
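
If multiple GPUs are available, below is a minimal sketch of the generic PyTorch way to split each batch across them. This is not wired into run_downstream_custom_multiple_fold.py; the model here is a placeholder standing in for the fine-tuning model.

```python
# Hedged sketch: generic multi-GPU data parallelism in PyTorch.
# The model below is a placeholder, not the repo's wav2vec 2.0 model.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(768, 256),
    nn.ReLU(),
    nn.Linear(256, 4),
)

if torch.cuda.device_count() > 1:
    # DataParallel splits each batch across the visible GPUs,
    # so per-GPU activation memory drops roughly in proportion.
    model = nn.DataParallel(model)

model = model.to("cuda")
```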

@zxpan You can reduce the batch size from 64 to 32.
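
For reference, halving the batch size roughly halves the activation memory used per training step. The sketch below only illustrates the kind of change; the dataset is a dummy placeholder, and in the repo the value would be changed wherever the training batch size (64) is configured.

```python
# Hedged sketch: reducing the DataLoader batch size to cut GPU memory use.
# The dataset here is dummy data, not the repo's emotion-recognition corpus.
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(1024, 16000))  # dummy 1-second, 16 kHz waveforms

# Dropping from 64 to 32 halves the per-step activation footprint,
# at the cost of more optimizer steps per epoch.
loader = DataLoader(dataset, batch_size=32, shuffle=True)
```

If the smaller batch affects convergence, gradient accumulation over two steps can recover an effective batch size of 64 without the extra memory.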