khundman / telemanom

A framework for using LSTMs to detect anomalies in multivariate time series data. Includes spacecraft anomaly data and experiments from the Mars Science Laboratory and SMAP missions.

Home Page: https://arxiv.org/abs/1802.04431

[Question] Changes required to run LSTM on GPU (CUDA)

abhishekms1047 opened this issue · comments

I'm trying to run the LSTMs on a GPU. Since LSTM is an RNN, performance is not much better than running on a CPU.
Can you please suggest what changes need to be made to the Keras LSTM layers?

What I Tried:
I used keras.layers.CuDNNLSTM. There was a significant improvement in training time, but only one LSTM network was running at a time, at about 30% GPU usage (NVIDIA Tesla K80 dual-GPU card).
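For context, this is a minimal sketch of the swap I made, assuming Keras 2.x with the TensorFlow backend (layer sizes and input shape are illustrative, not the repo's exact model):

```python
from keras.models import Sequential
from keras.layers import Dense, Dropout, CuDNNLSTM  # CuDNNLSTM requires a GPU + cuDNN

model = Sequential()
# CuDNNLSTM is a drop-in replacement for LSTM with the default
# tanh/sigmoid activations; it has no `dropout`/`recurrent_dropout`
# arguments, so dropout goes in separate layers.
model.add(CuDNNLSTM(80, input_shape=(250, 25), return_sequences=True))
model.add(Dropout(0.3))
model.add(CuDNNLSTM(80, return_sequences=False))
model.add(Dropout(0.3))
model.add(Dense(10))
model.compile(loss='mse', optimizer='adam')
```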

Questions:

  1. Using multiple GPUs, will I be able to run multiple neural networks at a time?

  2. Is there a way to allocate half of a GPU's cores to one neural network, the other half to another, and so on?

  1. Yes, you can start multiple processes to train different sets of networks and assign each process to a different GPU card if you have multiple available (see the sketch after this list).

  2. As far as I'm aware, you can't do this.
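A minimal sketch of point 1, assuming a hypothetical `train.py` entry point that trains the networks for one channel per invocation; each worker process is pinned to one of the K80's two devices via `CUDA_VISIBLE_DEVICES`:

```python
import os
import subprocess

channels = ["P-1", "S-1", "E-13", "A-4"]  # hypothetical channel IDs
num_gpus = 2  # a Tesla K80 exposes two CUDA devices

procs = []
for i, chan in enumerate(channels):
    env = os.environ.copy()
    # Pin this worker to one GPU; TensorFlow in that process
    # will only see the assigned card.
    env["CUDA_VISIBLE_DEVICES"] = str(i % num_gpus)
    procs.append(subprocess.Popen(
        ["python", "train.py", "--channel", chan],  # hypothetical script
        env=env,
    ))

for p in procs:
    p.wait()
```

Each process builds its own model and session on its assigned device, so the two GPUs train independently.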