Subtle issue in generating batches
alanramponi opened this issue · comments
I accidentally found that, in the current MaChAmp version, some data instances are not actually processed during training. This could potentially lead to an underestimation of performance across all tasks. Specifically, I found that num_batches examples (i.e., one example per batch) are not included during training. As a result, the smaller the batch size, the larger the number of examples that are ignored - a bugfix here is likely to improve performance here and there.
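For readers who want to see how such a bug can arise, here is a minimal, hypothetical sketch (not MaChAmp's actual code) of an off-by-one in batch slicing that drops exactly one example per batch, so num_batches examples are silently lost - and the smaller the batch size, the more batches there are, hence the more examples are ignored:

```python
def buggy_batches(data, batch_size):
    """Off-by-one: the '- 1' makes the slice end one element early,
    so each batch silently drops its last example."""
    return [data[start:start + batch_size - 1]          # bug: '- 1'
            for start in range(0, len(data), batch_size)]

def fixed_batches(data, batch_size):
    """Correct slicing: every example ends up in exactly one batch."""
    return [data[start:start + batch_size]
            for start in range(0, len(data), batch_size)]

data = list(range(10))
# With batch_size=2 there are 5 batches, so the buggy version loses 5 examples.
print(sum(len(b) for b in buggy_batches(data, 2)))  # 5
print(sum(len(b) for b in fixed_batches(data, 2)))  # 10
```

Note how the smaller the batch size, the more batches - and thus the more dropped examples - which matches the behavior described above.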
[This issue was created to alert users to the recommended new version.]