malllabiisc / RESIDE

EMNLP 2018: RESIDE: Improving Distantly-Supervised Neural Relation Extraction using Side Information

Question about the pre-trained GloVe you used

YaNjIeE opened this issue · comments

Hi, @svjan5
Sorry to bother you again.
I have read your paper and noticed that you used GloVe. Is the GloVe pre-trained on the NYT dataset? I also checked your code and found that you update the word embeddings during training. I'm confused about this: why not freeze them during training? Intuitively, freezing the pre-trained word embeddings seems a better way to represent word semantics, since each word has already been trained to an accurate representation.

Looking forward to your reply.
Best :^)

Hi @YaNjIeE,
No, I used the original pre-trained GloVe embeddings from here. In my present implementation, I allow the word embeddings to be tuned for the downstream task. Freezing them is also a valid option, but I didn't try that.
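
For anyone comparing the two options, here is a minimal TensorFlow 1.x sketch (the framework this repo uses). The `glove` matrix and variable names are placeholders, not taken from the RESIDE code; the only difference between tuning and freezing is the `trainable` flag:

```python
import numpy as np
import tensorflow as tf

# Placeholder GloVe matrix of shape (vocab_size, embed_dim);
# in practice this would be loaded from the pre-trained GloVe file.
glove = np.random.randn(10000, 50).astype(np.float32)

# Fine-tuned embeddings (what the current implementation does):
# trainable=True, so gradients from the downstream loss update the vectors.
tuned_emb = tf.get_variable(
    'tuned_emb',
    initializer=tf.constant(glove),
    trainable=True)

# Frozen embeddings (the alternative suggested in the question):
# identical initialization, but trainable=False keeps the vectors fixed.
frozen_emb = tf.get_variable(
    'frozen_emb',
    initializer=tf.constant(glove),
    trainable=False)

# Either variable is used the same way at lookup time.
token_ids = tf.placeholder(tf.int32, [None, None])  # (batch, seq_len)
embedded = tf.nn.embedding_lookup(tuned_emb, token_ids)
```

Which choice works better is an empirical question: tuning lets the vectors adapt to the task's vocabulary usage, while freezing preserves the general-purpose semantics and reduces the number of trainable parameters.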