There are 33 repositories under the safe-reinforcement-learning topic.
Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
OmniSafe is an infrastructural framework for accelerating SafeRL research.
NeurIPS 2023: Safety-Gymnasium: A Unified Safe Reinforcement Learning Benchmark
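Safety-Gymnasium follows the standard Gymnasium API but returns a per-step cost signal alongside the reward, which is what makes it a safe-RL benchmark. A minimal interaction loop might look like the sketch below; the environment id and episode length are illustrative assumptions, not part of this listing.

```python
# Minimal Safety-Gymnasium interaction loop (sketch).
# The key difference from plain Gymnasium is the extra `cost` return.
import safety_gymnasium

env = safety_gymnasium.make("SafetyPointGoal1-v0")
obs, info = env.reset(seed=0)
episode_reward, episode_cost = 0.0, 0.0
for _ in range(1000):
    action = env.action_space.sample()  # random policy as a placeholder
    obs, reward, cost, terminated, truncated, info = env.step(action)
    episode_reward += reward
    episode_cost += cost  # constraint-violation signal, kept separate from reward
    if terminated or truncated:
        obs, info = env.reset()
env.close()
print(f"return={episode_reward:.1f}, cost={episode_cost:.1f}")
```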
NeurIPS 2023: Safe Policy Optimization: A benchmark repository for safe reinforcement learning algorithms
Open-source reinforcement learning environment for autonomous racing, featured as a conference paper at ICCV 2021 and as the official challenge tracks at both SL4AD@ICML2022 and AI4AD@IJCAI2022. These are the L2R core libraries.
Multi-Agent Constrained Policy Optimisation (MACPO; MAPPO-L).
Reading list on the adversarial perspective and robustness in deep reinforcement learning.
Safe Pontryagin Differentiable Programming (Safe PDP) is a theoretical and algorithmic framework for solving a broad class of safety-critical learning and control tasks via differentiable programming.
The Verifiably Safe Reinforcement Learning Framework
Code for "Constrained Variational Policy Optimization for Safe Reinforcement Learning" (ICML 2022)
Source code for the paper "Optimal Energy System Scheduling Combining Mixed-Integer Programming and Deep Reinforcement Learning". Topics: safe reinforcement learning, energy management.
Safe Multi-Agent Isaac Gym benchmark for safe multi-agent reinforcement learning research.
[ICLR 2024] The official implementation of "Safe Offline Reinforcement Learning with Feasibility-Guided Diffusion Model"
Implementations of SAILR, PDO, and CSC
Training (hopefully) safe agents in gridworlds
Repository containing the code for the paper "Safe Model-Based Reinforcement Learning using Robust Control Barrier Functions". Specifically, an implementation of SAC + Robust Control Barrier Functions (RCBFs) for safe reinforcement learning in two custom environments
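The SAC + RCBF combination wraps a learned policy with a control-barrier-function safety filter that minimally perturbs each action so the barrier condition dh/dx * (f(x) + g(x)u) + alpha * h(x) >= 0 keeps h(x) >= 0. A minimal single-constraint sketch follows; the 1-D dynamics and barrier are toy assumptions for illustration, not the paper's environments.

```python
# Toy CBF safety filter (sketch): project the RL action onto the
# half-space that keeps h(x) >= 0 for 1-D control-affine dynamics
# x_dot = f(x) + g(x) * u. With one affine constraint, the QP
#   min ||u - u_rl||^2  s.t.  dh/dx * (f + g*u) + alpha*h >= 0
# has the closed-form clipping solution below.

def safe_action(x, u_rl, f, g, h, dh_dx, alpha=1.0):
    lhs = dh_dx(x) * g(x)                  # constraint coefficient on u
    rhs = -(dh_dx(x) * f(x) + alpha * h(x))
    if abs(lhs) < 1e-8:                    # u has no effect on the constraint
        return u_rl
    bound = rhs / lhs
    if lhs > 0:                            # constraint reads u >= bound
        return max(u_rl, bound)
    return min(u_rl, bound)                # constraint reads u <= bound

# Example: keep x <= 1 (h = 1 - x) for x_dot = u.
u = safe_action(x=0.9, u_rl=2.0,
                f=lambda x: 0.0, g=lambda x: 1.0,
                h=lambda x: 1.0 - x, dh_dx=lambda x: -1.0)
print(u)  # clipped toward the safe-set boundary: 0.1 instead of 2.0
```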
A Survey Analyzing Generalization in Deep Reinforcement Learning
Implementation of PPO Lagrangian in PyTorch
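PPO Lagrangian enforces a cost budget J_c(pi) <= d by adding a learned multiplier lambda: the policy maximizes the reward advantage minus lambda times the cost advantage, while dual ascent raises lambda whenever the measured episode cost exceeds the budget. A minimal sketch of the two coupled updates is below; the variable names, learning rates, and softplus parameterization are illustrative assumptions, not the repo's exact code.

```python
# Core of a PPO Lagrangian update (sketch).
# lambda is kept nonnegative via a softplus over an unconstrained parameter.
import torch
import torch.nn.functional as F

lam_raw = torch.zeros(1, requires_grad=True)   # unconstrained dual variable
lam_opt = torch.optim.Adam([lam_raw], lr=5e-2)
cost_limit = 25.0                              # budget d on episode cost

def lagrangian_advantage(adv_reward, adv_cost):
    """Combine reward/cost advantages; detach lambda for the policy step."""
    lam = F.softplus(lam_raw).detach()
    return (adv_reward - lam * adv_cost) / (1.0 + lam)

def update_lambda(mean_episode_cost):
    """Dual ascent: lambda grows when cost exceeds the budget, shrinks otherwise."""
    lam = F.softplus(lam_raw)
    lam_loss = -lam * (mean_episode_cost - cost_limit)
    lam_opt.zero_grad()
    lam_loss.backward()
    lam_opt.step()

# Inside the usual PPO epoch, the clipped surrogate is computed on
# lagrangian_advantage(...) instead of the reward advantage alone, and
# update_lambda(...) runs once per batch with the measured episode cost.
```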
A Multiplicative Value Function for Safe and Efficient Reinforcement Learning. IROS 2023.
[Humanoids 2022] Learning Collision-free and Torque-limited Robot Trajectories based on Alternative Safe Behaviors
ICLR 2024: SafeDreamer: Safe Reinforcement Learning with World Models
[IROS 2022] Model-free Neural Lyapunov Control
Safe Multi-Agent Robosuite benchmark for safe multi-agent reinforcement learning research.
The proceedings of top conferences in 2023 on the topic of Reinforcement Learning (RL), including AAAI, IJCAI, NeurIPS, ICML, ICLR, ICRA, AAMAS, and more.
Code for L4DC 2022 paper: Joint Synthesis of Safety Certificate and Safe Control Policy Using Constrained Reinforcement Learning.
Towards Safe Reinforcement Learning via Constraining Conditional Value at Risk (IJCAI 2022)
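CVaR_alpha is the expected cost in the worst alpha-fraction of outcomes, so constraining it targets tail risk rather than the mean. An empirical estimate over sampled episode costs takes only a few lines; the toy cost distribution below is made up for illustration.

```python
# Empirical CVaR_alpha of episode costs (sketch).
# CVaR_alpha = mean of the worst alpha-fraction of samples, i.e. the
# expected cost conditional on exceeding the (1 - alpha) quantile.
import numpy as np

def cvar(costs, alpha=0.1):
    costs = np.asarray(costs)
    var = np.quantile(costs, 1.0 - alpha)   # Value-at-Risk threshold
    tail = costs[costs >= var]              # worst alpha-fraction
    return tail.mean()

rng = np.random.default_rng(0)
costs = rng.exponential(scale=10.0, size=10_000)  # toy cost distribution
print(f"mean={costs.mean():.2f}  CVaR_0.1={cvar(costs, 0.1):.2f}")
# A constraint CVaR_0.1 <= d is much stricter than E[cost] <= d
# because it bounds the heavy tail, not just the average.
```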
Reinforcement Learning Course Project - IIT Bombay Fall 2018
Safe Policy Optimization with Local Features
Safe Multi-Agent Reinforcement Learning for Decision-Making in Autonomous Driving
Code for the paper Learning Deep Energy Shaping Policies for Stability-Guaranteed Manipulation, IEEE RA-L, 2021
The proceedings of top conferences in 2018 on the topic of Reinforcement Learning (RL), including AAAI, IJCAI, NeurIPS, ICML, ICLR, ICRA, AAMAS, and more.
Poster about Curriculum Induction for Safe Reinforcement Learning
Code for the paper Stability-guaranteed reinforcement learning for contact-rich manipulation, IEEE RA-L, 2020.