rikdz / GraphWriter

Code for "Text Generation from Knowledge Graphs with Graph Transformers"


Use of ELMo in the file models/list_encoder.py

vnik18 opened this issue

Hi, thank you for sharing the code!

Can you please explain how the ELMo embeddings are being used in the following lines in models/list_encoder.py?

```python
learned_emb = self.lemb(l)
learned_emb = self.input_drop(learned_emb)
if self.use_elmo:
    elmo_emb = self.elmo(l, word_inputs=l)
    e = torch.cat((elmo_emb['elmo_representations'][0], learned_emb), 2)
else:
    e = learned_emb
```

ELMo support is not included in the current repository. If you do want to use ELMo, you will need to pass a list of vocabulary items to cache to the lseq_encode constructor, as well as provide a path to the model weights in pargs.py (uncomment and modify the relevant lines).
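For reference, the call `self.elmo(l, word_inputs=l)` and the `'elmo_representations'` dictionary key match AllenNLP's `Elmo` module interface, so the setup would look roughly like the sketch below. The file paths and vocabulary list are placeholders, not values from the repo:

```python
import torch
from allennlp.modules.elmo import Elmo

# Placeholder paths -- in GraphWriter these would come from the
# (uncommented) entries in pargs.py.
options_file = "elmo_options.json"
weight_file = "elmo_weights.hdf5"

# Vocabulary items to cache, as handed to the lseq_encode constructor.
vocab = ["paper", "model", "graph", "transformer"]

# With vocab_to_cache set, Elmo precomputes representations for the
# given words, so the forward pass can consume word-id tensors.
elmo = Elmo(options_file, weight_file,
            num_output_representations=1,
            dropout=0.5,
            vocab_to_cache=vocab)

# l: (batch, seq_len) word ids over the cached vocabulary. Passing it
# as both arguments mirrors self.elmo(l, word_inputs=l); with a cached
# vocab, the word-id path is the one actually used.
l = torch.randint(1, len(vocab), (4, 12))
out = elmo(l, word_inputs=l)
elmo_emb = out['elmo_representations'][0]  # (4, 12, 1024)
```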

Hi, yes, I have done all of that and am able to get the ELMo embeddings to work. My question is about how exactly they are being used in the code.

Are the ELMo embeddings being used in place of the embedding layer, with the Bi-LSTM on top of them, to produce an encoding for the title?

The ELMo embeddings replace the learned word embeddings here and were used to encode the entities in the knowledge graph. I wasn't able to get them to work, though.
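To make the data flow in the snippet concrete: the ELMo vectors are concatenated with the learned embeddings along the feature dimension, and the Bi-LSTM then runs on top of the combined tensor. A minimal sketch with illustrative dimensions (the real sizes come from the model arguments; 1024 is ELMo's default output size):

```python
import torch
import torch.nn as nn

# Illustrative sizes, not the paper's settings.
learned_dim, elmo_dim, hidden = 500, 1024, 250
batch, seq_len = 4, 12

learned_emb = torch.randn(batch, seq_len, learned_dim)  # from self.lemb(l)
elmo_emb = torch.randn(batch, seq_len, elmo_dim)        # elmo_representations[0]

# Concatenate along the feature dimension (dim=2), as in list_encoder.py,
# so the learned embedding is augmented by ELMo rather than discarded.
e = torch.cat((elmo_emb, learned_emb), 2)               # (batch, seq_len, 1524)

# The Bi-LSTM encodes the entity/title token sequence on top of the
# concatenated embeddings.
bilstm = nn.LSTM(learned_dim + elmo_dim, hidden,
                 batch_first=True, bidirectional=True)
enc, _ = bilstm(e)                                      # (batch, seq_len, 2*hidden)
```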