How can the model be trained without sequences being padded?
boredtylin opened this issue
boredtylin commented
I have trouble understanding this fitting process:
for i in range(sample_size):
    history = model.fit([l_Qs[i], pos_l_Ds[i]] + [neg_l_Ds[j][i] for j in range(J)], y, epochs=1, verbose=0)
where each training sample goes through the network only once. I don't think that is sufficient.
On the other hand, I don't think padding the inputs works either; it just doesn't match the original method. Could someone give me some advice on how to handle variable-length inputs (as in the paper)?
Michael A. Alcorn commented
You can pass each training sample through the network as many times as you want.
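A minimal sketch of that idea, assuming the variables from the snippet quoted above (`model`, `l_Qs`, `pos_l_Ds`, `neg_l_Ds`, `y`, `J`, `sample_size`): wrap the per-sample `fit` calls in an outer epoch loop so every variable-length sample is presented to the network many times. `NUM_EPOCHS` is a hypothetical setting, not something from the repository.

```python
import numpy as np

NUM_EPOCHS = 20  # hypothetical value; tune for your data

for epoch in range(NUM_EPOCHS):
    # Optionally shuffle the sample order each epoch.
    order = np.random.permutation(sample_size)
    for i in order:
        # Each sample keeps its own sequence length, so no padding is needed;
        # the trade-off is one fit call (batch size 1) per sample.
        model.fit(
            [l_Qs[i], pos_l_Ds[i]] + [neg_l_Ds[j][i] for j in range(J)],
            y,
            epochs=1,
            verbose=0,
        )
```

Calling `fit` one sample at a time is slower than batched training, but it sidesteps the padding issue entirely because each input keeps its original length.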