Repositories under the safe-reinforcement-learning topic:
Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
OmniSafe is an infrastructural framework for accelerating SafeRL research.
NeurIPS 2023: Safety-Gymnasium: A Unified Safe Reinforcement Learning Benchmark
NeurIPS 2023: Safe Policy Optimization: A benchmark repository for safe reinforcement learning algorithms
Open-source reinforcement learning environment for autonomous racing — featured as a conference paper at ICCV 2021 and as the official challenge tracks at both SL4AD@ICML2022 and AI4AD@IJCAI2022. These are the L2R core libraries.
Multi-Agent Constrained Policy Optimisation (MACPO; MAPPO-L).
Reading list for adversarial perspective and robustness in deep reinforcement learning.
Source code for the paper "Optimal Energy System Scheduling Combining Mixed-Integer Programming and Deep Reinforcement Learning". Safe reinforcement learning for energy management.
Safe Pontryagin Differentiable Programming (Safe PDP) is a new theoretical and algorithmic safe differentiable framework to solve a broad class of safety-critical learning and control tasks.
Code for "Constrained Variational Policy Optimization for Safe Reinforcement Learning" (ICML 2022)
The Verifiably Safe Reinforcement Learning Framework
Safe Multi-Agent Isaac Gym benchmark for safe multi-agent reinforcement learning research.
[ICLR 2024] The official implementation of "Safe Offline Reinforcement Learning with Feasibility-Guided Diffusion Model"
Implementations of SAILR, PDO, and CSC
ICLR 2024: SafeDreamer: Safe Reinforcement Learning with World Models
Code for the paper "Safe Model-Based Reinforcement Learning using Robust Control Barrier Functions": an implementation of SAC + Robust Control Barrier Functions (RCBFs) for safe reinforcement learning in two custom environments.
Training (hopefully) safe agents in gridworlds
Implementation of PPO Lagrangian in PyTorch
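PPO Lagrangian handles a cost constraint by adding a learned Lagrange multiplier to PPO's objective and updating that multiplier by dual ascent on the observed constraint violation. A minimal sketch of the two key pieces in plain Python; the function names, the learning rate, and the 1/(1 + lam) rescaling are illustrative assumptions, not taken from this repository:

```python
def update_lagrange_multiplier(lam, avg_episode_cost, cost_limit, lr=0.05):
    """Dual-ascent step: increase lam when average cost exceeds the limit
    (constraint violated), decrease it otherwise, and clamp lam at zero."""
    return max(0.0, lam + lr * (avg_episode_cost - cost_limit))

def penalized_advantage(reward_adv, cost_adv, lam):
    """Advantage fed to the PPO surrogate loss: reward advantage minus the
    lam-weighted cost advantage, rescaled by 1/(1 + lam) to keep the
    magnitude stable as lam grows (a common implementation choice)."""
    return (reward_adv - lam * cost_adv) / (1.0 + lam)
```

With this scheme the multiplier acts as an adaptive penalty: a policy that keeps average cost below the limit drives lam back toward zero, recovering plain PPO.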
A Survey Analyzing Generalization in Deep Reinforcement Learning
[Humanoids 2022] Learning Collision-free and Torque-limited Robot Trajectories based on Alternative Safe Behaviors
A Multiplicative Value Function for Safe and Efficient Reinforcement Learning. IROS 2023.
[IROS '22] Model-free Neural Lyapunov Control
Safe Multi-Agent Robosuite benchmark for safe multi-agent reinforcement learning research.
The proceedings of top conferences in 2023 on Reinforcement Learning (RL), including AAAI, IJCAI, NeurIPS, ICML, ICLR, ICRA, AAMAS, and more.
Code for L4DC 2022 paper: Joint Synthesis of Safety Certificate and Safe Control Policy Using Constrained Reinforcement Learning.
Safe Multi-Agent Reinforcement Learning to Make decisions in Autonomous Driving
Towards Safe Reinforcement Learning via Constraining Conditional Value at Risk (IJCAI 2022)
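Conditional Value at Risk (CVaR) at level alpha is the expected cost over the worst (1 - alpha) fraction of outcomes, so constraining it targets rare catastrophic episodes rather than the mean cost. A minimal empirical estimator in plain Python, given only as a sketch of the risk measure itself, not of the paper's method:

```python
def empirical_cvar(costs, alpha=0.9):
    """Average of the worst (1 - alpha) fraction of cost samples:
    sort ascending, keep the tail at and above the alpha-quantile index,
    and return its mean."""
    if not costs:
        raise ValueError("need at least one cost sample")
    sorted_costs = sorted(costs)
    # Index of the empirical alpha-quantile (VaR); clamp so the tail
    # always contains at least one sample.
    k = min(int(len(sorted_costs) * alpha), len(sorted_costs) - 1)
    tail = sorted_costs[k:]
    return sum(tail) / len(tail)
```

At alpha = 0 this reduces to the ordinary mean cost; as alpha approaches 1 it approaches the worst-case sample.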
Reinforcement Learning Course Project - IIT Bombay Fall 2018
Safe Policy Optimization with Local Features
Code for the paper Learning Deep Energy Shaping Policies for Stability-Guaranteed Manipulation, IEEE RA-L, 2021
The proceedings of top conferences in 2018 on Reinforcement Learning (RL), including AAAI, IJCAI, NeurIPS, ICML, ICLR, ICRA, AAMAS, and more.
A safety-aware human-in-the-loop Reinforcement Learning (SaHiL-RL) approach for end-to-end autonomous driving.
A Safety-Guaranteed Learning Algorithm for Voltage Regulation in Active Distribution Networks