cindy17xn / Neural-LP

Differentiable Learning of Logical Rules for Knowledge Base Reasoning

Neural LP

This is the implementation of Neural Logic Programming, proposed in the following paper:

Differentiable Learning of Logical Rules for Knowledge Base Reasoning. Fan Yang, Zhilin Yang, William W. Cohen. NIPS 2017.

Requirements

  • Python 2.7
  • Numpy
  • Tensorflow 1.0.1

Quick start

The following command starts training on a dataset about family relations and stores the experiment results in the folder exps/demo/.

python src/main.py --datadir=datasets/family --exps_dir=exps/ --exp_name=demo

After around 8 minutes, navigate to exps/demo/; the file rules.txt contains the learned logical rules.
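As a rough sketch, rules.txt can be inspected with a few lines of Python. The exact file format assumed here (a confidence score, a tab, then the rule text) is a guess for illustration, so check your own rules.txt and adapt the parsing accordingly:

```python
# Sketch: load learned rules and list the highest-confidence ones first.
# ASSUMPTION: each line looks like "<confidence><tab><rule text>";
# adjust the split if your rules.txt differs.

def load_rules(lines):
    """Parse (confidence, rule) pairs from rules.txt-style lines."""
    rules = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        conf, rule = line.split("\t", 1)
        rules.append((float(conf), rule))
    # Sort so the highest-confidence rules come first.
    return sorted(rules, key=lambda r: -r[0])

if __name__ == "__main__":
    # Toy lines standing in for the contents of exps/demo/rules.txt.
    sample = [
        "0.91\tbrother(X, Y) <-- sibling(X, Y), gender_male(X)",
        "0.45\tuncle(X, Y) <-- brother(X, Z), parent(Z, Y)",
    ]
    for conf, rule in load_rules(sample):
        print("%.2f  %s" % (conf, rule))
```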

Evaluation

To evaluate the prediction results, follow the steps below. The first two steps are preparation so that we can compute filtered ranks (see TransE for details).

We use the experiment from Quick Start as an example. Change the folder names (datasets/family, exps/demo) for other experiments.

. eval/collect_all_facts.sh datasets/family
python eval/get_truths.py datasets/family
python eval/evaluate.py --preds=exps/demo/test_predictions.txt --truths=datasets/family/truths.pckl
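For reference, here is a minimal sketch of how ranking metrics such as MRR and Hits@10 are typically computed from per-query filtered ranks. This is not the repository's actual evaluation code, just an illustration of the metrics evaluate.py is expected to report:

```python
# Sketch: standard KB-completion metrics from filtered ranks.
# A "filtered" rank ignores other known-true answers when ranking the
# test answer, which is what the two preparation steps above enable.

def mrr(ranks):
    """Mean reciprocal rank: average of 1/rank over all test queries."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at(ranks, k):
    """Fraction of queries whose correct answer ranks in the top k."""
    return sum(1 for r in ranks if r <= k) / float(len(ranks))

if __name__ == "__main__":
    ranks = [1, 3, 12, 2, 1]  # toy filtered ranks, one per test query
    print("MRR     : %.3f" % mrr(ranks))
    print("Hits@10 : %.3f" % hits_at(ranks, 10))
```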

About


License: MIT License

