Question about the pre-trained GloVe you used
YaNjIeE opened this issue
ReDsUn commented
Hi, @svjan5
Sorry to bother you again.
I have read your paper and noticed that you used GloVe. Is the GloVe pre-trained on the NYT dataset? I also checked your code and found that you update the word embeddings during training. I'm confused about this update of the word embeddings: why not freeze them during training? Intuitively, freezing the pre-trained word embeddings seems like a better way to represent word semantics, since each word has already been trained and has an accurate representation.
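Just to make sure we are talking about the same thing, here is a minimal sketch of the two options I mean, assuming a PyTorch-style embedding layer (not your actual code, and with a random stand-in for the loaded GloVe matrix):

```python
import torch
import torch.nn as nn

# Stand-in for a loaded GloVe matrix: vocab_size x embed_dim
pretrained = torch.randn(10000, 300)

# Option A: freeze the embeddings (they stay fixed during training)
frozen_emb = nn.Embedding.from_pretrained(pretrained, freeze=True)

# Option B: fine-tune the embeddings (the optimizer keeps updating them)
tuned_emb = nn.Embedding.from_pretrained(pretrained, freeze=False)

# Only Option B's weights receive gradients
ids = torch.tensor([1, 5, 42])
loss = frozen_emb(ids).sum() + tuned_emb(ids).sum()
loss.backward()
print(frozen_emb.weight.grad)        # None  -> not updated
print(tuned_emb.weight.grad.shape)   # torch.Size([10000, 300]) -> updated
```

My question is why your setup corresponds to Option B rather than Option A.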
Looking forward to your reply.
Best :^)
Shikhar Vashishth commented