
DeepSeaProbLog (Beta)

This is the official repository of DeepSeaProbLog. DeepSeaProbLog is an extension of DeepProbLog that integrates logic-based reasoning with (deep) probabilistic programming in discrete-continuous domains.

This repository is in beta and will be updated frequently. Basic functionality is complete, but feedback is most welcome.

Requirements

To use DeepSeaProbLog, you need to install the following dependencies:

Tutorial

The unifying concept in DeepSeaProbLog is the neural distributional fact

variable([Inputs], S) ~ distribution(network([Inputs])).

Here, [Inputs] is a list of (neural) inputs, network is the identifier of a neural network that parametrises distribution, and S identifies the resulting random variable. Note that the parameters do not need to originate from a neural network; they can also be passed to the distribution directly, as in

variable(S) ~ distribution(Parameters).

For example, a two-dimensional standard normal distribution is defined as

normal_var(S) ~ normal([[0, 0], [1, 1]]).
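
For the neural case, the template above can be instantiated with a network identifier. The following sketch assumes a hypothetical network digit_encoder that maps an image to the mean and standard deviation of a normal distribution; the names are illustrative and not part of the library:

% Hypothetical: digit_encoder outputs the parameters of a normal
% distribution over an image's latent code.
latent([Image], S) ~ normal(digit_encoder([Image])).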

A regular DeepSeaProbLog program then consists of a list of variable declarations followed by a list of logical rules, as in the following template (a concrete instance is sketched after it)

% Variables
var_1(_, S) ~ distr_1(_).
...
var_n(_, S) ~ distr_n(_).

% Logic
rule_1(_) :-
    var_1(_, V1), var_n(_, Vn), add(V1, Vn, Vsum), smaller_than(0, Vsum).
...
rule_m(_) :- 
    var_1(_, V1), var_n(_, Vn), equals(V1, Vn).
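
As a concrete, purely illustrative instance of this template (all predicate names are made up, and the [mean, standard deviation] parameter convention is assumed from the example above), consider two normally distributed temperatures and a rule that holds when the morning is warmer than the evening:

% Variables
temp_morning(S) ~ normal([15, 2]).
temp_evening(S) ~ normal([10, 3]).

% Logic
warmer_morning :-
    temp_morning(M), temp_evening(E), smaller_than(E, M).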

A program should be stored as a .pl file and can be loaded into Python through the Model class. Apart from the .pl file, the Model interface also takes a list of all neural networks used in the program. Altogether, we load a joint neural-symbolic probabilistic model in Python as

from model import Model

model = Model("program.pl", networks)
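
As an illustration, and assuming the Keras-style API used later in this README, networks could be a plain Python list of the networks referenced in the program; the architecture and names below are placeholders, not prescribed by DeepSeaProbLog:

import tensorflow as tf
from model import Model

# Hypothetical encoder for the latent/2 fact sketched earlier;
# the architecture is illustrative only.
digit_encoder = tf.keras.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(4),  # e.g. two means and two standard deviations
])

model = Model("program.pl", [digit_encoder])

How networks are matched to the identifiers used in the .pl file is specific to the implementation; consult the examples in the repository for the exact convention.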

The probability of a query q can be computed directly via model.solve_query(q).
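
For example, with the illustrative program sketched above (and assuming queries are passed as ProbLog term strings, which is an assumption about the interface rather than documented behaviour):

p_warmer = model.solve_query("warmer_morning")  # hypothetical query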

For training purposes, the query first has to be compiled, after which training proceeds as is traditional in an ML context

# Assuming the TensorFlow/Keras Adam optimiser and loss, per the API style here.
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import BinaryCrossentropy

model.set_optimizer(Adam(learning_rate=0.001))
model.set_loss(BinaryCrossentropy())
model.compile_query(q)
model.train(data)

Experiments

All experiments from our paper are provided in DeepSeaProbLog/examples, together with a couple of novel tasks.

About

The official implementation of DeepSeaProbLog, a neural probabilistic logic programming language supporting discrete and continuous random variables.

License: Apache License 2.0

