Why is the NT F1 exactly the same as the F1 score when running this code?
ZhichaoDuan opened this issue · comments
I cloned this repo and tried to replicate the results, and I found that each time I evaluate the model's performance after training, the NT F1 score and the F1 score are exactly the same, which is odd.
There was such a bug, but I thought it had been fixed. I haven't encountered it since I updated the code.
Did you try the BERT version?
Yeah, I used the BERT version and encountered this problem. As for the GloVe version, I haven't tested it yet.
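For anyone debugging this: here is a minimal, hypothetical sketch of one way the two scores can coincide. The function and variable names below are illustrative assumptions, not taken from this repo. If NT F1 is an F1 computed after excluding some set of triples from both predictions and gold labels, then an accidentally empty exclusion set makes NT F1 collapse to the plain F1.

```python
def f1(preds, golds, exclude=frozenset()):
    """Micro F1 over sets of (head, relation, tail) triples,
    skipping any triple in `exclude` (hypothetical helper)."""
    preds = {p for p in preds if p not in exclude}
    golds = {g for g in golds if g not in exclude}
    tp = len(preds & golds)
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(golds) if golds else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

golds = {("e1", "r1", "e2"), ("e3", "r2", "e4")}
preds = {("e1", "r1", "e2"), ("e5", "r1", "e6")}

plain_f1 = f1(preds, golds)
# Suspected bug: the exclusion set is empty, so "NT F1" equals plain F1.
nt_f1_empty = f1(preds, golds, exclude=frozenset())
# With a non-empty exclusion set the two metrics generally differ.
nt_f1_real = f1(preds, golds, exclude={("e1", "r1", "e2")})

print(plain_f1 == nt_f1_empty)  # True: identical when nothing is excluded
print(plain_f1 == nt_f1_real)   # False in this example
```

So a first thing to check is whether the set of excluded triples is actually being populated in the evaluation script; if it ends up empty, the two F1 values will always match.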