ryonakamura / parlai_agents

# ParlAI Agent examples with PyTorch, Chainer and TensorFlow

seq2seq with PyTorch causes GPU out of memory

Hello, when I train the example seq2seq model on the Ubuntu dataset in ParlAI, the GPU runs out of memory after training on a few thousand examples. Do you know how to solve this problem? I saw similar issues posted before; I think it is related to PyTorch.

Hello, @ZixuanLiang
I am sorry, but the example implementations are currently guaranteed to work only on the bAbI tasks.
On other tasks, if the vocabulary is too large, the embedding and softmax matrices become huge, causing the out-of-memory error. The vocabulary needs to be reduced by mapping low-frequency words to an unknown token. Unfortunately, ParlAI does not implement this feature. I plan to implement a dictionary agent (e.g., dict-minfreq, subword, SentencePiece) to solve this problem soon.
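For illustration, a minimal sketch of that kind of min-frequency pruning in plain Python (the `build_vocab`/`encode` helpers, the `min_freq` parameter, and the `<unk>` token are hypothetical names, not ParlAI's actual dictionary API):

```python
# Minimal sketch: prune a vocabulary by minimum frequency and map
# everything else to a single <unk> token. Names are illustrative,
# not part of ParlAI.
from collections import Counter

UNK = '<unk>'

def build_vocab(tokenized_texts, min_freq=5):
    """Keep tokens seen at least `min_freq` times; the rest map to <unk>."""
    counts = Counter(tok for text in tokenized_texts for tok in text)
    vocab = [UNK] + [tok for tok, c in counts.items() if c >= min_freq]
    return {tok: i for i, tok in enumerate(vocab)}

def encode(tokens, tok2id):
    unk_id = tok2id[UNK]
    return [tok2id.get(tok, unk_id) for tok in tokens]
```

Since the embedding and softmax matrices each scale with vocabulary size times hidden size, cutting the vocabulary directly shrinks the two largest parameter matrices in the model.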
Also, in the case of seq2seq, if the input sentence is too long, the number of LSTM states that must be kept in memory for backpropagation grows, which also causes out of memory. A simple workaround may be to reduce the hidden size, or to truncate overly long inputs, as in the sketch below.
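A rough sketch of both workarounds, assuming PyTorch (`MAX_LEN`, `HIDDEN_SIZE`, and the `Encoder` class are illustrative, not this repo's actual model code):

```python
# Rough sketch of bounding seq2seq encoder memory; names are illustrative.
import torch
import torch.nn as nn

MAX_LEN = 50       # truncate inputs longer than this
HIDDEN_SIZE = 256  # smaller hidden size -> smaller weights and activations

class Encoder(nn.Module):
    def __init__(self, vocab_size, hidden_size=HIDDEN_SIZE):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.lstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)

    def forward(self, token_ids):
        # Truncating the sequence bounds the number of LSTM time steps
        # whose activations must be stored for backpropagation.
        token_ids = token_ids[:, :MAX_LEN]
        output, (h, c) = self.lstm(self.embed(token_ids))
        return output, (h, c)
```

Per-example activation memory grows roughly with sequence length times hidden size, so both knobs trade model capacity for memory.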
Thank you!