taolei87 / rcnn

Recurrent & convolutional neural network modules

Error while running main.py

dhruvjain opened this issue · comments

Hi, I get this error while running main.py with the data specified in the directory. Any idea how to resolve this? I am running it on my own laptop with 8 GB of RAM.

MemoryError: failed to alloc sm output
Apply node that caused the error: CrossentropySoftmaxArgmax1HotWithBias(Dot22.0, b, Reshape{1}.0)
Toposort index: 360
Inputs types: [TensorType(float64, matrix), TensorType(float64, vector), TensorType(int32, vector)]
Inputs shapes: [(5888, 100410), (100410,), (5888,)]
Inputs strides: [(803280, 8), (8,), (4,)]
Inputs values: ['not shown', 'not shown', 'not shown']
Outputs clients: [[Reshape{2}(CrossentropySoftmaxArgmax1HotWithBias.0, MakeVector{dtype='int64'}.0)], [CrossentropySoftmax1HotWithBiasDx(Reshape{1}.0, CrossentropySoftmaxArgmax1HotWithBias.1, Reshape{1}.0)], []]

HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done with by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'.
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.

Hi, it seems you are running the language model training. I suspect the final softmax layer uses too much memory, since it has to map each hidden state into a vector whose size equals the number of unique words in the vocabulary.
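
A quick back-of-the-envelope check against the shapes in the traceback supports this. This is just a sketch, not code from the repo; the numbers are taken from the error message, and the "same-sized gradient matrix" remark is an assumption based on the CrossentropySoftmax1HotWithBiasDx client shown in the traceback:

```python
# Rough memory estimate for the softmax node reported in the traceback above.
# Shapes and dtype (float64) come straight from the error message.
rows, vocab = 5888, 100410          # (batch * unrolled steps, unique words)
bytes_per_float = 8                 # float64

softmax_gib = rows * vocab * bytes_per_float / 2**30
print(f"softmax output alone: ~{softmax_gib:.1f} GiB")   # ~4.4 GiB

# The backward pass (CrossentropySoftmax1HotWithBiasDx in "Outputs clients")
# likely needs another matrix of the same shape, so peak usage can easily
# exceed 8 GB of RAM.
```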

You could try reducing the batch size (--batch or --batch_size, depending on which code you are running) or reducing the hidden dimension (-d).
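
For a rough sense of how those two knobs help, here is a sketch (again, not code from the repo). The assumption is that the number of softmax rows shrinks with the batch size and that the output projection is a hidden-dim × vocabulary weight matrix; the example values of d are placeholders:

```python
# Sketch of how --batch and -d affect the two largest allocations.
vocab, bytes_per_float = 100410, 8

def softmax_gib(rows):
    # (rows, vocab) output and gradient matrices; rows shrink with the batch size
    return rows * vocab * bytes_per_float / 2**30

def output_weight_gib(d):
    # (d, vocab) projection from hidden states to the vocabulary
    return d * vocab * bytes_per_float / 2**30

print(f"rows 5888 -> 1472: {softmax_gib(5888):.1f} GiB -> {softmax_gib(1472):.1f} GiB")
print(f"d 400 -> 100:      {output_weight_gib(400):.2f} GiB -> {output_weight_gib(100):.2f} GiB")
```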