There are 13 repositories under the neuro-symbolic-learning topic.
PyTorch implementation for the Neuro-Symbolic Concept Learner (NS-CL).
Implementation for the Neural Logic Machines (NLM).
A collection of papers on neural-symbolic AI (mainly focusing on NLP applications)
AIKA is a new type of artificial neural network designed to more closely mimic the behavior of a biological brain and to bridge the gap to classical AI. A key design decision in the AIKA network is to conceptually separate activations from their neurons, resulting in two separate graphs: one graph consists of neurons and synapses, representing the knowledge the network has already acquired; the other consists of activations and links, describing the information the network was able to infer about a concrete input data set. There is a one-to-many relation between neurons and activations. For example, a neuron might represent a word or a specific meaning of a word, while several activations of this neuron might exist, each representing an occurrence of that word within the input data set. A consequence of this decision is that the idea of a fixed layered topology for the network has to be given up, since the sequence in which activations are fired depends on the input data set. Within the activation network, each activation is grounded in the input data set, even if there are several activations in between. Links between activations therefore serve two purposes: on the one hand, they are used to sum up the synapse weights; on the other hand, they propagate the identity to higher-level activations.
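The two-graph idea described above can be sketched in a few lines of Python. This is an illustrative toy, not AIKA's actual API (AIKA is written in Java); all class and function names here are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of AIKA's two-graph design: a knowledge graph of
# neurons/synapses, and a per-input graph of activations/links.

@dataclass
class Neuron:
    label: str                                     # e.g. a word or word sense
    synapses: list = field(default_factory=list)   # (source_neuron, weight) pairs

@dataclass
class Activation:
    neuron: "Neuron"              # one-to-many: a neuron may fire many times
    position: int                 # grounding in the concrete input data set
    links: list = field(default_factory=list)      # incoming activation links
    value: float = 0.0

def fire(neuron, position, incoming):
    """Create an activation grounded at `position`. Its links serve both
    purposes named above: summing synapse weights and propagating the
    identity of lower-level activations upward."""
    act = Activation(neuron, position)
    weight_by_source = {id(src): w for src, w in neuron.synapses}
    for in_act in incoming:
        w = weight_by_source.get(id(in_act.neuron), 0.0)
        act.links.append(in_act)            # propagate identity upward
        act.value += w * in_act.value       # sum up synapse weights
    return act

# Knowledge graph: neurons and synapses (acquired once).
the = Neuron("the")
cat = Neuron("cat")
phrase = Neuron("the-cat", synapses=[(the, 1.0), (cat, 1.0)])

# Activation graph for one input, "the cat ... the cat": the same
# phrase neuron fires twice, once per occurrence.
a1 = Activation(the, 0, value=1.0)
a2 = Activation(cat, 1, value=1.0)
p1 = fire(phrase, 1, [a1, a2])
a3 = Activation(the, 4, value=1.0)
a4 = Activation(cat, 5, value=1.0)
p2 = fire(phrase, 5, [a3, a4])    # second activation of the same neuron
```

Note how the firing order is driven entirely by the input, which is why a fixed layered topology has to be given up.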
Neuro-Symbolic Visual Question Answering on Sort-of-CLEVR using PyTorch
An efficient Python toolkit for Abductive Learning (ABL), a novel paradigm that integrates machine learning and logical reasoning in a unified framework.
Usable implementation of the Emerging Symbol Binding Network (ESBN), in PyTorch
Lernd is an implementation of the ∂ILP (dILP) framework, based on DeepMind's paper Learning Explanatory Rules from Noisy Data.
Holographic Reduced Representations
Tree Stack Memory Units
An attempt to merge ESBN with Transformers, to endow Transformers with the ability to emergently bind symbols
A novel approach to learning concept embeddings and approximate reasoning in ALC knowledge bases with deep neural networks
BotGNN: Inclusion of Domain-Knowledge into GNNs using Mode-Directed Inverse Entailment
Vertex-Enriched Graph Neural Network (VEGNN)
The official repository for the PSYCHIC model
Implementation of a straight-through gradient wrapper that allows for discrete latent representations. Provides a binary discretizer, which maps hidden representations to {0, 1}, and a learnable multi-value discretizer, which maps hidden activations to their closest value in a set of a given size.
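The binary case can be sketched with the common straight-through estimator (STE) trick in PyTorch: threshold on the forward pass, pretend the threshold was the identity on the backward pass. This is a minimal sketch of the general technique, not the repository's actual implementation.

```python
import torch

class BinaryST(torch.autograd.Function):
    """Straight-through binary discretizer (illustrative sketch)."""

    @staticmethod
    def forward(ctx, x):
        # Forward: hard threshold to {0, 1} -- non-differentiable.
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        # Backward: pass the gradient straight through, as if the
        # forward pass had been the identity.
        return grad_output

x = torch.randn(4, requires_grad=True)
y = BinaryST.apply(x)        # discrete values in {0, 1}
y.sum().backward()           # gradients still flow to x
```

A learnable multi-value discretizer follows the same pattern: snap each activation to its nearest entry in a learnable codebook on the forward pass, and copy the gradient through on the backward pass.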
Master's thesis: Knowledge Inference and Knowledge Completion Methods using Neuro-Symbolic Inductive Rules