quancore / social-lstm

Social LSTM implementation in PyTorch

Prediction not autoregressive?

Binbose opened this issue · comments

During sampling of future trajectories, it seems like the ground truth is fed back into the network rather than the predicted point, so the network is not autoregressive. Is that correct? Below is the line of code I am referring to.

```python
out_, hidden_states, cell_states = net(
    x_seq[tstep].view(1, numx_seq, 2),
    [grid[tstep]], hidden_states, cell_states,
    [Pedlist[tstep]], [num_pedlist[tstep]],
    dataloader, look_up)
```

For the validation sampling, yes, this is the case. I was using the ground truth for prediction because it is available. However, you can change it to an autoregressive manner as well.

If I change it to an autoregressive behavior (basically just exchanging x_seq with ret_x_seq), the output becomes very unstable though and doesn't really make sense anymore (training and running everything with your default settings). Were you able to achieve good results in an autoregressive mode? Concretely, the swap I mean looks like this (a sketch based on the line above; ret_x_seq holds the previously predicted points):
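
```python
# Autoregressive variant: feed the model's own previous prediction
# (ret_x_seq[tstep]) back in instead of the ground-truth point (x_seq[tstep]).
out_, hidden_states, cell_states = net(
    ret_x_seq[tstep].view(1, numx_seq, 2),  # predicted point, not ground truth
    [grid[tstep]], hidden_states, cell_states,
    [Pedlist[tstep]], [num_pedlist[tstep]],
    dataloader, look_up)
```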

It should work as well, but worse than prediction from the ground truth. If the model is simple and unable to predict accurately, it can accumulate the error and produce a poor sequence. But eventually (over consecutive epochs), it should give better sequences as accuracy increases.

As you can see here:

```python
for tstep in range(args.obs_length - 1, args.pred_length + args.obs_length - 1):
```

In the test script, we use autoregression to predict the unobserved part, and it works. The process is very similar.
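
Roughly, that loop does something like the following (a simplified sketch built from the lines above; `sample_from_output` is a hypothetical stand-in for however the repo samples a point from the predicted bivariate Gaussian, so the exact helper name may differ):

```python
# Simplified sketch of autoregressive prediction over the unobserved frames.
# ret_x_seq is pre-filled with the observed frames; each predicted point is
# written back into it and fed to the network at the next step.
for tstep in range(args.obs_length - 1, args.pred_length + args.obs_length - 1):
    out_, hidden_states, cell_states = net(
        ret_x_seq[tstep].view(1, numx_seq, 2),
        [grid[tstep]], hidden_states, cell_states,
        [Pedlist[tstep]], [num_pedlist[tstep]],
        dataloader, look_up)
    next_point = sample_from_output(out_)  # hypothetical: sample (x, y) per pedestrian
    ret_x_seq[tstep + 1] = next_point      # feed the prediction back in
```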

Hm, I trained it with the default settings (batch size 5, 30 epochs), and it doesn't really seem to work. Was the default number of epochs sufficient for you to get good results, or do I have to increase it?

I am unable to reproduce the training session right now, so I cannot say whether it is working or not. However, the default parameters should be enough for good training; if not, you can increase the number of epochs.