Inconsistent validation results during training vs. `allennlp evaluate`
shmahaj opened this issue · comments
Shweti Mahajan commented
I am fine-tuning a model, and the validation accuracy I get during training with `allennlp train` (85%) is higher than what I get when I run `allennlp evaluate` (42%) on the saved model and the same validation set. Could you please suggest what might be going wrong here?
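One generic cause of a train-time vs. evaluate-time metric gap (offered only as a hedged illustration, not confirmed as the cause in this issue) is running inference with the model still in training mode, so stochastic layers like dropout stay active. A minimal PyTorch sketch of the effect; the model and input here are made up for demonstration:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical toy model with a dropout layer between two linear layers.
model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5), nn.Linear(4, 2))
x = torch.ones(1, 4)

# In train mode, dropout randomly zeroes activations, so repeated
# forward passes on the same input can produce different outputs.
model.train()
train_a = model(x)
train_b = model(x)

# In eval mode, dropout is disabled and the forward pass is deterministic.
model.eval()
eval_a = model(x)
eval_b = model(x)

print(torch.equal(eval_a, eval_b))  # eval-mode passes are identical
```

If the metrics at the two stages disagree this sharply, it is also worth checking that the evaluate step loads the same weights the training loop reported (e.g. best vs. last checkpoint) and reads the same validation file with the same preprocessing.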
github-actions commented
This issue is being closed due to lack of activity. If you think it still needs to be addressed, please comment on this thread 👇