lmb-freiburg / Multimodal-Future-Prediction

The official repository for the CVPR 2019 paper "Overcoming Limitations of Mixture Density Networks: A Sampling and Fitting Framework for Multimodal Future Prediction"

Question about training the fitting stage

Shaluols opened this issue

Hi,

I want to ask whether I can use the `make_graph` function in `net.py` to build the optimizers for both the sampling and fitting training stages. I ran into graph-cycle errors when I did this. I first saved the trained sampling model, then loaded the weights and tried to continue training the fitting network. Here is my pipeline:
[screenshots: training pipeline code]
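Roughly, it does the following (heavily simplified; the network bodies, losses, and shapes below are placeholders, not the real `net.py` code):

```python
import numpy as np
import tensorflow as tf

# Simplified stand-in for make_graph(): a sampling net feeding a fitting net.
x = tf.placeholder(tf.float32, [None, 2])
y = tf.placeholder(tf.float32, [None, 2])
with tf.variable_scope('net1'):                       # sampling network
    hyps = tf.layers.dense(x, 2)
with tf.variable_scope('net2'):                       # fitting network
    fit_out = tf.layers.dense(hyps, 2)

sample_loss_op = tf.reduce_mean(tf.square(hyps - y))  # placeholder loss
fit_loss_op = tf.reduce_mean(tf.square(fit_out - y))  # placeholder loss
sample_train_op = tf.train.AdamOptimizer(1e-4).minimize(sample_loss_op)
fit_train_op = tf.train.AdamOptimizer(1e-4).minimize(fit_loss_op)

saver = tf.train.Saver()
with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    # stage 1: train the sampling network here, then saver.save(session, ...)
    # stage 2: saver.restore(session, ...) and continue with the fitting loop
    batch_x = np.random.randn(8, 2).astype(np.float32)
    batch_y = np.random.randn(8, 2).astype(np.float32)
    _, loss_value = session.run([fit_train_op, fit_loss_op],
                                feed_dict={x: batch_x, y: batch_y})
```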

The errors happen when it reaches `_, loss_value = session.run([fit_train_op, fit_loss_op])`:
[screenshots: error traces (truncated)]

If I ignore these errors, the loop continues, but `loss_value` stays almost constant. I think the way I define the graph and optimizer is not correct. Could you give me some guidance on this? Thanks in advance!

Hi,

The first thing I notice is at line 80 of your code: you need to use the `bounded_log_sigmas` instead of the `sigmas`, so change the second argument from `output[1]` to `output[3]`.
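To illustrate the difference (the clipping below is just one common way to bound log-sigmas; the actual bounding in `net.py` may differ):

```python
import tensorflow as tf

log_sigma = tf.constant([-20.0, 0.0, 20.0])
# bound the log-scales so exp() stays in a numerically safe range
bounded_log_sigma = tf.clip_by_value(log_sigma, -6.0, 6.0)  # assumed bounds
sigma = tf.exp(bounded_log_sigma)
with tf.Session() as session:
    print(session.run([bounded_log_sigma, sigma]))
```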

Moreover, you need to ensure that when training the fitting network, the gradient does not flow backward into the sampling network. You can ensure that by adding `stop_gradient` on `input_2` in net.py (`make_graph()`, at line 43); see the toy example below.
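As a toy demonstration (not the repository code) of what `stop_gradient` does here:

```python
import tensorflow as tf

w_sampling = tf.Variable(1.0)  # stands in for the sampling network's weights
hypotheses = w_sampling * 3.0  # stands in for input_2, the sampling output
hypotheses = tf.stop_gradient(hypotheses)  # the suggested barrier
w_fitting = tf.Variable(2.0)   # stands in for the fitting network's weights
fit_loss = tf.square(w_fitting * hypotheses - 1.0)

# The path back to w_sampling is blocked, so its gradient is None; only the
# fitting weights receive a gradient from the fitting loss.
print(tf.gradients(fit_loss, [w_sampling, w_fitting]))  # [None, <Tensor>]
```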

Note that while training the fitting network, the NLL loss usually does not decrease much, but it should not stay constant. Maybe you can share the fitting loss plot.
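For reference, a minimal sketch of what a mixture NLL parameterized by log-sigmas can look like (which is also why `output[3]` is the right input above); the names and shapes are assumptions, not our actual loss implementation:

```python
import numpy as np
import tensorflow as tf

def mixture_nll(y, means, log_sigmas, weights):
    """y: [batch, dim]; means, log_sigmas: [batch, modes, dim]; weights: [batch, modes]."""
    y = tf.expand_dims(y, axis=1)                          # [batch, 1, dim]
    # per-mode diagonal-Gaussian log-density, summed over the dimensions
    log_prob = -0.5 * tf.reduce_sum(
        tf.square((y - means) * tf.exp(-log_sigmas))
        + 2.0 * log_sigmas + np.log(2.0 * np.pi), axis=2)  # [batch, modes]
    # stable log-sum-exp over the mixture components
    log_mix = tf.reduce_logsumexp(tf.log(weights) + log_prob, axis=1)
    return -tf.reduce_mean(log_mix)
```

Working in log-sigma space avoids dividing by tiny sigmas, which is one reason the loss consumes the (bounded) log-sigmas directly.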

Depending on your application, you need to check the quality of the hypotheses from the sampling network. If they are bad, the fitting network won't be able to fit a good mixture model to them.

Hope this helps,

Thanks for your findings! Changing `output[1]` to `output[3]` solved the constant fitting loss problem. I also added `input_2 = tf.stop_gradient(input_2)` before the 'net2' layers, but the loss is not much different from the run without `stop_gradient`. However, the errors are still there; I will train with more data and epochs to see whether they affect performance.

I have not visualized the hypotheses yet but will do so later. My hypothesis training loss is around 11 after 500 epochs on 10k training pairs.

I think this can be closed for now. Please feel free to reopen it if you encounter other problems.