yxuansu / TaCL

[NAACL'22] TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning

Home Page: https://arxiv.org/abs/2111.04198


How can I extract word embeddings?

mathshangw opened this issue

Sorry for the late reply. I need to extract word embeddings.

Originally posted by @mathshangw in #6 (comment)


Hi,

You can refer to this line to extract the features from TaCL; attention_hidden_states[0] is the word embedding you want.
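For reference, here is a minimal sketch of the same extraction using the plain Hugging Face transformers API, assuming the cambridgeltl/tacl-bert-base-uncased checkpoint from the TaCL repo; in this API the last-layer token representations live in last_hidden_state, which should correspond to the features returned by the line referenced above:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Checkpoint name taken from the TaCL repo; swap in another
# TaCL checkpoint if you trained your own.
model_name = "cambridgeltl/tacl-bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

inputs = tokenizer("TaCL improves BERT pre-training.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One vector per sub-word token: shape [batch_size, seq_len, hidden_size].
token_embeddings = outputs.last_hidden_state
```

Note that each row of token_embeddings corresponds to one sub-word token produced by the tokenizer, not necessarily to one whole word.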

Thanks for replying, and sorry for my late reply. I understand that BERT works on sub-words, but my problem is getting word-level embeddings. How should those be computed, and does the line you mentioned give word embeddings or sub-word embeddings?
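Since BERT-style models only produce sub-word vectors, a word embedding is usually derived by pooling the vectors of that word's pieces. Below is a minimal sketch of mean-pooling via the tokenizer's word_ids() mapping, again assuming the cambridgeltl/tacl-bert-base-uncased checkpoint; this is a generic technique, not something the TaCL repo prescribes:

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "cambridgeltl/tacl-bert-base-uncased"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

words = ["tokenization", "splits", "long", "words"]
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**enc).last_hidden_state[0]  # [seq_len, hidden_size]

# word_ids() maps each sub-token position to the index of the word it
# came from (None for special tokens such as [CLS] and [SEP]).
word_ids = enc.word_ids()
word_embeddings = []
for w in range(len(words)):
    positions = [i for i, wid in enumerate(word_ids) if wid == w]
    # Mean-pool the sub-word vectors belonging to word w.
    word_embeddings.append(hidden[positions].mean(dim=0))
```

Mean-pooling is one common choice; taking only the first sub-token's vector per word is another popular alternative.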