Michael Lu (sudo-michael)


Location: Vancouver, BC

Home Page: https://sudo-michael.github.io/

Twitter: @sudo_mlu

Michael Lu's repositories

Language: Python · License: MIT · Stargazers: 1 · Issues: 0
Language: Python · Stargazers: 0 · Issues: 0
Language: MATLAB · Stargazers: 0 · Issues: 0

ciff

Cornell Instruction Following Framework

Language: Python · License: GPL-3.0 · Stargazers: 0 · Issues: 0

cleanrl

High-quality single file implementation of Deep Reinforcement Learning algorithms with research-friendly features

Language: Python · License: NOASSERTION · Stargazers: 0 · Issues: 0

cogail

Co-GAIL: Learning Diverse Strategies for Human-Robot Collaboration

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

cpo-pytorch

An implementation of Constrained Policy Optimization (Achiam 2017) in PyTorch

Language: Python · Stargazers: 0 · Issues: 0
Language: Python · License: MIT · Stargazers: 0 · Issues: 0

event-jekyll-theme

Jekyll Theme package for your event

Language: HTML · License: GPL-3.0 · Stargazers: 0 · Issues: 0

gail-airl-ppo.pytorch

A PyTorch implementation of GAIL and AIRL based on PPO.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0
Language: Python · Stargazers: 0 · Issues: 0
Language: MATLAB · Stargazers: 0 · Issues: 0
Language: Python · Stargazers: 0 · Issues: 0
Language: SCSS · License: MIT · Stargazers: 0 · Issues: 0

omnisafe

OmniSafe is an infrastructural framework for accelerating SafeRL research.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0
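OmniSafe drives its SafeRL algorithms through a single agent-style entry point; the sketch below illustrates that pattern only. The `omnisafe.Agent` quickstart call, the `PPOLag` algorithm name, and the `SafetyPointGoal1-v0` task id are assumptions about the installed version, not a guaranteed API.

```python
# Minimal OmniSafe sketch. All names below are assumptions; check the repo's
# README for the exact quickstart of the version you install.
import omnisafe

env_id = "SafetyPointGoal1-v0"            # assumed Safety-Gymnasium task id
agent = omnisafe.Agent("PPOLag", env_id)  # Lagrangian PPO, a common SafeRL baseline
agent.learn()                             # train with the algorithm's default config
```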

optimized_dp

Optimizing Dynamic Programming-Based Algorithms

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

proactive_interventions

Codebase for NeurIPS 2022 paper, "When to Ask for Help: Proactive Interventions in Autonomous Reinforcement Learning"

Stargazers: 0 · Issues: 0

pytorch-soft-actor-critic

PyTorch implementation of soft actor critic

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

recovery-rl

Implementation of Recovery RL: Safe Reinforcement Learning With Learned Recovery Zones.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

rl-baselines3-zoo

A training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.

License: MIT · Stargazers: 0 · Issues: 0

rl-starter-files

RL starter files to immediately train, visualize, and evaluate an agent without writing any code

License: MIT · Stargazers: 0 · Issues: 0

robopianist

🎹 🤖 A benchmark for high-dimensional robot control.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

safe-control-gym

PyBullet CartPole and Quadrotor environments—with CasADi symbolic a priori dynamics—for learning-based control and reinforcement learning

License: MIT · Stargazers: 0 · Issues: 0

Safe-MBPO

Code for the NeurIPS 2021 paper "Safe Reinforcement Learning by Imagining the Near Future"

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

safety-gym

Tools for accelerating safe exploration research.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0
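Safety Gym environments follow the classic Gym interface and report the safety signal through the info dict rather than as a separate return value. A rough sketch of that loop, assuming the `Safexp-PointGoal1-v0` id and the `info['cost']` convention (verify both against the installed version):

```python
# Rough Safety Gym loop (old Gym API; the env id and the 'cost' key are assumptions).
import gym
import safety_gym  # noqa: F401  # registers the Safexp-* environments on import

env = gym.make("Safexp-PointGoal1-v0")
obs = env.reset()
for _ in range(100):
    obs, reward, done, info = env.step(env.action_space.sample())
    cost = info.get("cost", 0.0)  # constraint-violation signal lives in info
    if done:
        obs = env.reset()
env.close()
```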

safety-gymnasium

Safety-Gymnasium is a highly scalable and customizable safe reinforcement learning environment library.

License: Apache-2.0 · Stargazers: 0 · Issues: 0
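Safety-Gymnasium keeps the Gymnasium-style interface but returns the cost as an explicit element of the step tuple. A minimal sketch, assuming the `SafetyPointGoal1-v0` task id and a `(obs, reward, cost, terminated, truncated, info)` step signature (both are assumptions about the installed version):

```python
# Minimal Safety-Gymnasium loop (task id and 6-tuple step signature are assumptions).
import safety_gymnasium

env = safety_gymnasium.make("SafetyPointGoal1-v0")
obs, info = env.reset(seed=0)
for _ in range(100):
    action = env.action_space.sample()
    obs, reward, cost, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```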
Language: Python · License: NOASSERTION · Stargazers: 0 · Issues: 0

sbx

SBX: Stable Baselines Jax (SB3 + Jax)

Language: Python · License: MIT · Stargazers: 0 · Issues: 0
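SBX mirrors the Stable Baselines3 API on top of Jax, so code written against SB3 usually only needs the import swapped. A small sketch under that assumption (the set of exported algorithms may differ by version):

```python
# SBX sketch: same interface as Stable Baselines3, Jax under the hood.
# Assumes `sbx` exports PPO with the SB3-style constructor/learn API.
from sbx import PPO

model = PPO("MlpPolicy", "Pendulum-v1", verbose=1)
model.learn(total_timesteps=10_000)
```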

siren-jax

Unofficial implementation of Siren with Jax for image representation.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

stable-baselines3

PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0
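Stable-Baselines3 wraps each algorithm behind a uniform model interface (construct, learn, predict). A short sketch of the usual workflow, assuming `stable_baselines3` and the Gymnasium `CartPole-v1` environment are installed:

```python
# Standard Stable-Baselines3 workflow: build a model, train it, then roll it out.
from stable_baselines3 import PPO

model = PPO("MlpPolicy", "CartPole-v1", verbose=1)
model.learn(total_timesteps=10_000)

env = model.get_env()   # the (vectorized) training environment
obs = env.reset()       # VecEnv reset returns observations only
for _ in range(200):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```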