Senne Deproost (SenneDeproost)


Company: Siemens

Location: European Union

Home Page: www.sennedeproost.cf


Senne Deproost's starred repositories

keras

Deep Learning for humans

Language: Python | License: Apache-2.0 | Stargazers: 61347 | Issues: 1914 | Issues: 12000

skynet

A lightweight online game framework

torch7

http://torch.ch

Language: C | License: NOASSERTION | Stargazers: 8962 | Issues: 625 | Issues: 733

awful-ai

😈 Awful AI is a curated list tracking current scary uses of AI, in the hope of raising awareness

metaseq

Repo for external large-scale work

Language: Python | License: MIT | Stargazers: 6437 | Issues: 110 | Issues: 292

agents

TF-Agents: A reliable, scalable and easy to use TensorFlow library for Contextual Bandits and Reinforcement Learning.

Language: Python | License: Apache-2.0 | Stargazers: 2761 | Issues: 80 | Issues: 659

singularity

Singularity has been renamed to Apptainer as part of moving the project to the Linux Foundation. This repo is preserved as a snapshot taken right before the change.

Language: Go | License: NOASSERTION | Stargazers: 2514 | Issues: 89 | Issues: 3256

Pearl

A Production-ready Reinforcement Learning AI Agent Library brought by the Applied Reinforcement Learning team at Meta.

Language: Jupyter Notebook | License: MIT | Stargazers: 2439 | Issues: 32 | Issues: 53

chainerrl

ChainerRL is a deep reinforcement learning library built on top of Chainer.

Language: Python | License: MIT | Stargazers: 1159 | Issues: 71 | Issues: 198

procgen

Procgen Benchmark: Procedurally-Generated Game-Like Gym-Environments

Language: C++ | License: MIT | Stargazers: 992 | Issues: 145 | Issues: 75

gpubootcamp

This repository contains GPU bootcamp material for HPC and AI

Language: Jupyter Notebook | License: Apache-2.0 | Stargazers: 501 | Issues: 23 | Issues: 62

deer

DEEp Reinforcement learning framework

Language: Python | License: NOASSERTION | Stargazers: 485 | Issues: 50 | Issues: 32

mtrl

Multi Task RL Baselines

Language: Python | License: MIT | Stargazers: 222 | Issues: 10 | Issues: 29

awesome-explainable-reinforcement-learning

A Survey on Explainable Reinforcement Learning: Concepts, Algorithms, Challenges

ballmer-peak

A website that creates a schedule for those attempting to climb the Ballmer Peak

Language: JavaScript | License: MIT | Stargazers: 82 | Issues: 5 | Issues: 3

sindy-rl

Code for "SINDy-RL: Interpretable and Efficient Model-Based Reinforcement Learning" by Zolman et al.

ODINN.jl

Global glacier model using Universal Differential Equations for climate-glacier interactions

Language: Julia | License: MIT | Stargazers: 67 | Issues: 5 | Issues: 67

seals

Benchmark environments for reward modelling and imitation learning algorithms.

Language: Python | License: MIT | Stargazers: 44 | Issues: 10 | Issues: 14

PreviewCode

QuickLook source code preview and icon thumbnailing app extensions for macOS Catalina and beyond

Language: Swift | License: MIT | Stargazers: 39 | Issues: 3 | Issues: 10

leaps

Code for "Learning to Synthesize Programs as Interpretable and Generalizable Policies" (NeurIPS 2021)

Language: Python | License: MIT | Stargazers: 28 | Issues: 6 | Issues: 1

Interpretable_DDTS_AISTATS2020

Public code for implementation and experiments with differentiable decision trees.

Language: Python | License: MIT | Stargazers: 25 | Issues: 4 | Issues: 0

openhps-core

OpenHPS: Core Component

Language: TypeScript | License: Apache-2.0 | Stargazers: 23 | Issues: 1 | Issues: 20

Interactive-Multi-objective-Reinforcement-Learning

Multi-objective reinforcement learning deals with finding policies for tasks that have multiple distinct criteria to optimize. Since there may be trade-offs between the criteria, a globally best policy does not necessarily exist; instead, the goal is to find Pareto-optimal policies, each of which is best for some preference function. The Pareto Q-learning algorithm searches for all Pareto-optimal policies at once. This project introduces a variant of Pareto Q-learning that poses queries to a user who is assumed to have an underlying preference function, as well as a scalarized Q-learning algorithm that reduces the dimensionality of the multi-objective space with a scalarization function whose weights are elicited from the user. The goal is to find the optimal policy for that user's preference function as quickly as possible. Two benchmark problems, Deep Sea Treasure and Resource Collection, are used for the experiments.

Language: Python | Stargazers: 19 | Issues: 0 | Issues: 0
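The scalarized variant described above can be illustrated with a minimal sketch: a linear scalarization function collapses each vector reward into a scalar using user-supplied weights, after which ordinary tabular Q-learning applies. The toy environment, weights, and hyperparameters below are illustrative assumptions, not code from the repository.

```python
import numpy as np

def scalarize(reward_vec, weights):
    """Linear scalarization: collapse a reward vector to one number."""
    return float(np.dot(weights, reward_vec))

def q_update(Q, s, a, reward_vec, s_next, weights, alpha=0.1, gamma=0.95):
    """One tabular Q-learning step on the scalarized reward."""
    r = scalarize(reward_vec, weights)
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    return Q

# Tiny Deep-Sea-Treasure-style toy: each action yields a reward
# *vector* (treasure value, -time cost). Action 0 gives a small quick
# treasure, action 1 a larger but slower one.
rewards = {0: np.array([1.0, -1.0]), 1: np.array([5.0, -4.0])}
weights = np.array([0.7, 0.3])   # hypothetical user: values treasure over speed
Q = np.zeros((2, 2))             # 2 states x 2 actions
rng = np.random.default_rng(0)
for _ in range(500):
    a = int(rng.integers(2))     # pure exploration, enough for this sketch
    Q = q_update(Q, 0, a, rewards[a], 1, weights)
print(Q[0])                      # for these weights, action 1 scores higher
```

With the chosen weights the scalarized returns are 0.7·1 − 0.3·1 = 0.4 for action 0 and 0.7·5 − 0.3·4 = 2.3 for action 1, so the learned Q-values reflect this user's preference; a different weight vector would favor the quicker treasure instead.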

PiRL

Programmatically Interpretable Reinforcement Learning

Language: Python | License: MIT | Stargazers: 16 | Issues: 3 | Issues: 2

EGGP

A public repository for Evolving Graphs by Graph Programming

Language: C | Stargazers: 7 | Issues: 6 | Issues: 0

Language: C++ | License: GPL-3.0 | Stargazers: 5 | Issues: 0 | Issues: 0

shepherd

Distributed Reinforcement Learning over HTTP

Language: Python | License: NOASSERTION | Stargazers: 5 | Issues: 4 | Issues: 0

Picto

Picto2Text and Text2Picto

Language: Perl | Stargazers: 4 | Issues: 1 | Issues: 0

smol-strats

Synthesizing compact strategies for MDPs specified in the PRISM syntax

Language: Python | License: GPL-3.0 | Stargazers: 2 | Issues: 0 | Issues: 0

DriViDOC

Driving from Vision through Differentiable Optimal Control