PhilosophyLSTM

Can an LSTM philosophise? Answer: not really, haha. Training an LSTM on Plato's "The Republic". Inspired by Andrej Karpathy's article on LSTMs: http://karpathy.github.io/2015/05/21/rnn-effectiveness/ . Basically, what we're training is a character-level language model. "We feed the LSTM a chunk of text and ask it to model the probability distribution of the next character in the sequence given a sequence of previous characters. This will then allow us to generate new text one character at a time."
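To give a rough idea of what that means in code, here is a minimal sketch of character-level next-character prediction with an LSTM. This is illustrative Keras code, not the code in this repo; the layer size, sequence length, and toy text are assumptions.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

text = "the unexamined life is not worth living"  # stand-in for The Republic
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}

# Build (previous seq_len characters -> next character) training pairs
seq_len = 10
X, y = [], []
for i in range(len(text) - seq_len):
    X.append([char_to_idx[c] for c in text[i:i + seq_len]])
    y.append(char_to_idx[text[i + seq_len]])

# One-hot encode inputs and targets
X = np.eye(len(chars))[np.array(X)]   # shape: (samples, seq_len, vocab)
y = np.eye(len(chars))[np.array(y)]   # shape: (samples, vocab)

# The LSTM reads the previous characters and outputs a probability
# distribution over the next character
model = Sequential([
    LSTM(128, input_shape=(seq_len, len(chars))),
    Dense(len(chars), activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=5, verbose=0)
```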

Notes

  • Run on a GPU, otherwise training takes ages
  • More epochs when training the LSTM => more coherent sentences produced (I only tried up to 30 epochs, AWS is expensive hehe)
  • Hyperparameters of Word2Vec and the LSTM were not really tuned, since that would be quite costly (I used parameters/structure that are common in the literature)
  • Pro Tip: I used AWS, but Google Cloud is cheaper :P

Running

First run

create_word2vec.py

This will create the word2vec vectors for the text and store them in vectors.bin. Then run:
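As a rough idea of what this step does, here is a hedged sketch assuming gensim is used for Word2Vec; the parameter values and the toy sentences are illustrative, not necessarily what create_word2vec.py actually uses.

```python
from gensim.models import Word2Vec

# sentences: tokenised sentences from "The Republic" (toy examples here)
sentences = [["justice", "is", "the", "excellence", "of", "the", "soul"],
             ["the", "soul", "is", "immortal"]]

# Train word vectors and save them in the binary word2vec format
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, workers=4)
model.wv.save_word2vec_format("vectors.bin", binary=True)
```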

lstm_trainer_generator.py

This both trains the LSTM and then generates sample sentences. If you only want to train the LSTM, use the train_model() method; it saves the weights to a file called "lstm-weights". If you only want to generate sentences, you need an already trained LSTM (i.e. the weights file) and can then use the generate_sentences() method. To change the structure of the LSTM, edit the file:

lstm_model.py
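As a rough picture of how these pieces fit together, here is a hedged sketch of the kind of model structure lstm_model.py might define, and how the train/generate split works around the "lstm-weights" file. The layer sizes, sequence length, and vocabulary size are illustrative assumptions, not the repo's exact values.

```python
from keras.models import Sequential
from keras.layers import LSTM, Dense

SEQ_LEN, EMBED_DIM, VOCAB = 10, 100, 5000  # illustrative only

def build_model():
    # Two stacked LSTM layers followed by a softmax over the vocabulary
    model = Sequential([
        LSTM(256, input_shape=(SEQ_LEN, EMBED_DIM), return_sequences=True),
        LSTM(256),
        Dense(VOCAB, activation="softmax"),
    ])
    model.compile(loss="categorical_crossentropy", optimizer="adam")
    return model

# Training path: fit the model and persist its weights (what train_model() does)
model = build_model()
# model.fit(X_train, y_train, epochs=30, batch_size=128)  # needs prepared data
model.save_weights("lstm-weights")

# Generation path: rebuild the same structure and load the saved weights
# (what generate_sentences() needs before sampling text)
generator = build_model()
generator.load_weights("lstm-weights")
```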


License: MIT License

