JinheonBaek / GEN

Official Code Repository for the paper "Learning to Extrapolate Knowledge: Transductive Few-shot Out-of-Graph Link Prediction" (NeurIPS 2020)

Home Page: https://arxiv.org/abs/2006.06648

Transductive and inductive setup for out-of-graph entities

neerajkuhike opened this issue

Hi,

Your work is very interesting. Could you explain the inductive and transductive setups of the data during training and testing? How do they differ in your setup in terms of dataset preparation for training and testing?

Hi,
Thank you for your interest.

Dataset preparation is the same for both the inductive and transductive setups, to ensure a fair evaluation; the only difference between the two is the training scheme. The inductive method only considers mappings from one (unseen) entity to a few (seen and unseen) entities, where the unseen entities have no embeddings (as described in the paper: "treating them as noises or ignoring them as zero vectors like a previous inductive scheme"). The transductive method additionally considers relationships between unseen entities, since they now have embeddings produced by the underlying inductive step.
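
To make the distinction concrete, here is a minimal sketch of what each scheme lets an unseen entity aggregate from. The function names and the plain mean aggregation are hypothetical simplifications for illustration, not the actual GEN layers:

```python
import torch

def inductive_aggregate(u, seen_emb, support_edges):
    # Inductive scheme (sketch): an unseen entity u aggregates ONLY from the
    # seen entities in its few-shot support set; other unseen entities are
    # ignored because they have no embeddings yet.
    nbrs = [seen_emb[s] for (x, s) in support_edges if x == u]
    return torch.stack(nbrs).mean(dim=0)

def transductive_aggregate(u, seen_emb, unseen_emb, support_edges, unseen_edges):
    # Transductive scheme (sketch): u additionally aggregates from other unseen
    # entities, whose embeddings were produced by the inductive step.
    nbrs = [seen_emb[s] for (x, s) in support_edges if x == u]
    nbrs += [unseen_emb[v] for (x, v) in unseen_edges if x == u]
    return torch.stack(nbrs).mean(dim=0)
```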

At the code level, the transductive method first generates embeddings of the unseen entities inductively (https://github.com/JinheonBaek/GEN/blob/main/GEN-KG/trainer_trans.py#L152), and then generates refined embeddings of the unseen entities with the transductive scheme, using the previously obtained inductive embeddings together with the seen entities (https://github.com/JinheonBaek/GEN/blob/main/GEN-KG/trainer_trans.py#L169).
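
A self-contained toy walk-through of that two-pass ordering is below. The tensors, entity names, and mean aggregation are hypothetical and stand in for the trainer's actual data structures; the point is only that the output of the inductive pass feeds the transductive pass:

```python
import torch

torch.manual_seed(0)
seen_emb = {"e1": torch.randn(4), "e2": torch.randn(4)}   # embeddings of seen entities
support = {"u1": ["e1", "e2"], "u2": ["e2"]}              # seen neighbors of each unseen entity
unseen_nbrs = {"u1": ["u2"], "u2": ["u1"]}                # unseen-unseen links

# Pass 1 (inductive, cf. trainer_trans.py#L152): embed each unseen entity
# from its seen support neighbors only.
unseen_emb = {u: torch.stack([seen_emb[s] for s in nbrs]).mean(dim=0)
              for u, nbrs in support.items()}

# Pass 2 (transductive, cf. trainer_trans.py#L169): re-embed each unseen
# entity, now also aggregating the pass-1 embeddings of its unseen neighbors.
refined = {u: torch.stack([seen_emb[s] for s in support[u]] +
                          [unseen_emb[v] for v in unseen_nbrs[u]]).mean(dim=0)
           for u in support}

print(refined["u1"].shape)  # torch.Size([4])
```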

If you have any more questions, feel free to ask.

Sincerely,
Jinheon Baek