jcjohnson / torch-rnn

Efficient, reusable RNNs and LSTMs for torch

Is my training running?

Quambus opened this issue · comments

Hi everyone, I'm a complete noob with Python/Torch, but I thought this whole thing was really cool and decided to give it a try. I've managed to preprocess my data and install Torch with all the other dependencies. However, when I run train.lua and then sample.lua, it's giving me nothing. My files were named a.h5 and a.json. Could anyone please help? Thanks in advance.

[screenshot attached]

Two things I'd check:

  1. Is your system actually doing work? If you are using CUDA you can check with nvidia-smi; otherwise use top.
  2. Does it work if you pre-process and train on the included data/tiny-shakespeare.txt dataset?
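Check 1 above can be sketched as a small shell snippet (not from the thread, just an illustration): nvidia-smi only exists on machines with the NVIDIA driver installed, so fall back to top elsewhere.

```shell
# Pick a process monitor depending on whether an NVIDIA driver is present.
# nvidia-smi only exists on CUDA systems; top works everywhere.
if command -v nvidia-smi >/dev/null 2>&1; then
  monitor=nvidia-smi   # GPU utilization should be nonzero while training
else
  monitor=top          # look for a busy luajit/th process instead
fi
echo "monitor training with: $monitor"
```

While train.lua is running, the chosen tool should show sustained CPU or GPU load; if it doesn't, training isn't actually running.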

I'm not using CUDA, and I processed the Shakespeare data set, but when I try to sample it, it's still not giving me anything. Thanks again

Did you train on Shakespeare before sampling?

I tried, and this is what I'm getting

[screenshot attached]

You're typing the commands into Torch's shell. You need to type them into your OS's shell.

Ah, I'm doing a somewhat janky setup where I'm running Bash on Ubuntu, since I have a PC. Could that be the problem?

No, Bash is the standard shell on Ubuntu, and I'm not sure what else you'd be running it on if you didn't have a PC. The fix is simply this: instead of typing "th" to start Torch's interactive shell, run "th train.lua -input_h5 ..." etc. directly from Bash.
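A sketch of what that full command looks like when run from the OS shell rather than the Torch REPL, using the tiny-shakespeare paths from torch-rnn's README (the `command -v` guard just skips the step if Torch isn't installed):

```shell
# Run train.lua from Bash, not from inside the th REPL.
H5=data/tiny-shakespeare.h5
JSON=data/tiny-shakespeare.json
if command -v th >/dev/null 2>&1 && [ -f "$H5" ]; then
  # -gpu -1 forces CPU-only mode, which matches this user's setup
  th train.lua -input_h5 "$H5" -input_json "$JSON" -gpu -1
else
  echo "Torch (th) not found or data missing; see the torch-rnn README for setup"
fi
```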

Oh okay, thanks! I tried that and got this error:

[screenshot attached]

I think I'm in the wrong directory? How do I get into the right directory? Also, is there a specific place where I need to put my training files? Thanks

Run ls and see which directory you have torch-rnn in (probably "torch-rnn").
Then cd torch-rnn, or whatever the directory is called.
Now try training again: train.lua is in the directory you just changed to.

Alternatively, you can prepend torch-rnn/ (or whichever directory it was) to train.lua, like this:
th torch-rnn/train.lua -input_h5 tiny-shakespeare.h5 -input_json tiny-shakespeare.json

Whichever method works best for you.
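Once training has run for a while, sampling follows the same pattern. A hedged sketch, again with assumed paths: torch-rnn writes checkpoints into cv/ by default, and the exact filename (checkpoint_10000.t7 here) depends on how many iterations you trained for.

```shell
# Sample from a trained checkpoint; run from inside the torch-rnn directory.
CKPT=cv/checkpoint_10000.t7
if command -v th >/dev/null 2>&1 && [ -f "$CKPT" ]; then
  th sample.lua -checkpoint "$CKPT" -length 500 -gpu -1
else
  echo "no checkpoint at $CKPT yet; finish training first"
fi
```

If sample.lua prints nothing, the usual causes are exactly the ones in this thread: the command was typed into the th REPL instead of Bash, or training never produced a checkpoint.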