JinheonBaek / GEN

Official Code Repository for the paper "Learning to Extrapolate Knowledge: Transductive Few-shot Out-of-Graph Link Prediction" (NeurIPS 2020)

Home Page: https://arxiv.org/abs/2006.06648


Question about pre-trained embedding setting

Jasper-Wu opened this issue · comments

Hi,

Thanks for your great work!

I have a question about the pre-trained embeddings generated with DistMult. Do we need to ignore "unseen entities" during pre-training, i.e., remove all triples that contain unseen entities? Or do we feed the whole KG into the pre-training process and then mask the unseen entities during inductive/transductive training?

Thanks!

Hi,

Thank you for your interest in our work.

When we pre-train the embeddings of seen entities with DistMult, we completely ignore unseen entities; in other words, we remove all triplets that contain unseen entities.
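For concreteness, here is a minimal sketch of that filtering step (the file paths, function names, and variable names are illustrative assumptions, not the repository's actual preprocessing code):

```python
def load_triples(path):
    """Read (head, relation, tail) triples from a tab-separated file."""
    triples = []
    with open(path) as f:
        for line in f:
            h, r, t = line.strip().split("\t")
            triples.append((h, r, t))
    return triples


def filter_seen_triples(triples, unseen_entities):
    """Keep only triples whose head AND tail are seen entities."""
    unseen = set(unseen_entities)
    return [(h, r, t) for h, r, t in triples
            if h not in unseen and t not in unseen]


# Hypothetical usage: pre-train DistMult only on the filtered triples.
# all_triples = load_triples("train.txt")                       # full KG
# unseen = {line.strip() for line in open("unseen_entities.txt")}
# seen_triples = filter_seen_triples(all_triples, unseen)
```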

Best regards,
Jinheon Baek