quancore / social-lstm

Social LSTM implementation in PyTorch


Loss function computation

ThomasMrY opened this issue · comments

Hi, I have a question about the training process:

Forward prop

            outputs, _, _ = net(x_seq, grid_seq, hidden_states, cell_states, PedsList_seq, numPedsList_seq, dataloader, lookup_seq)

Compute loss

            loss = Gaussian2DLikelihood(outputs, x_seq, PedsList_seq, lookup_seq)

Why is the loss computed using outputs and x_seq rather than outputs and y_seq?
Thanks

x_seq represents the observed part of a trajectory and y_seq is the unknown part (to be predicted). During training, we use the known (observed) part of the trajectory to train the model and compute the training loss.
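To make the loss term concrete: Gaussian2DLikelihood scores the observed positions under a bivariate Gaussian whose parameters the network outputs. The sketch below is a hypothetical reimplementation, not the repo's actual function; it assumes the last output dimension holds (μx, μy, σx, σy, ρ), with the standard deviations and correlation unconstrained so they must be squashed first.

```python
import torch

def gaussian_2d_nll(outputs, targets):
    # outputs: (seq_len, num_peds, 5) raw Gaussian parameters (assumed layout)
    # targets: (seq_len, num_peds, 2) observed (x, y) positions
    mux, muy, sx, sy, corr = outputs.unbind(dim=-1)
    sx, sy = torch.exp(sx), torch.exp(sy)   # enforce positive std devs
    corr = torch.tanh(corr)                 # enforce correlation in (-1, 1)

    dx = (targets[..., 0] - mux) / sx
    dy = (targets[..., 1] - muy) / sy
    one_minus_rho2 = 1 - corr ** 2

    # Negative log density of a bivariate Gaussian, averaged over all steps/peds
    z = dx ** 2 + dy ** 2 - 2 * corr * dx * dy
    log_pdf = (-z / (2 * one_minus_rho2)
               - torch.log(2 * torch.pi * sx * sy * torch.sqrt(one_minus_rho2)))
    return -log_pdf.mean()
```

Minimizing this pushes the predicted distribution toward the positions that were actually observed, which is why the observed sequence appears as the target.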

commented

@quancore hello. I am also confused about the loss calculation. In that case, do you mean that the model is mainly learning the sequence relationships among the hidden states, so it does not matter whether its output is compared with the known (observed) part or the unknown part? But will it affect the model's performance when predicting unknown situations, since it was always trained to predict the known part? Thank you; I am waiting for your reply.
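One way to read the discussion above: even when training only on the observed window, an RNN is typically scored on the *next* observed step given the previous ones, so it is still learning to predict, just with ground truth available at every step. The following is a minimal hypothetical sketch of that one-step-ahead setup (a toy LSTM with an MSE head standing in for the Gaussian likelihood, not the repo's actual code):

```python
import torch
import torch.nn as nn

# Toy stand-ins for the model; sizes and the MSE head are assumptions
rnn = nn.LSTM(input_size=2, hidden_size=16)
head = nn.Linear(16, 2)           # predicts the next (x, y) position

x_seq = torch.randn(8, 1, 2)      # 8 observed timesteps, 1 pedestrian
h, _ = rnn(x_seq[:-1])            # consume observed steps 0..6
pred = head(h)                    # predictions for steps 1..7

# The target is still the observed sequence, shifted by one step
loss = torch.mean((pred - x_seq[1:]) ** 2)
loss.backward()
```

Under this reading, training on x_seq is not "predicting what is already known": at each step the model only sees the past and must predict the following position, which is the same task it performs on y_seq at inference time.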