azzeddineCH / Highway-PPO-Agent-in-Jax

A JAX implementation of a PPO agent for the highway environment, with discrete and continuous action spaces.

Proximal Policy Optimization (PPO) on Highway Environment

This repository contains an implementation of the Proximal Policy Optimization (PPO) algorithm applied to the highway environment, using both discrete and continuous action spaces.

About the Project

The goal of this project is to implement the PPO algorithm on the highway environment and compare its performance with discrete and continuous action spaces. The highway environment is a well-known reinforcement learning benchmark in which an agent learns to drive a car on a highway while avoiding collisions with other vehicles.
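To illustrate how the two action spaces map onto a PPO agent, here is a minimal sketch (not the repository's exact code): a Categorical policy head for the discrete case and a diagonal Gaussian head for the continuous case built with Distrax, together with the standard PPO clipped surrogate loss written in JAX. The function names and shapes are illustrative assumptions.

```python
import jax.numpy as jnp
import distrax

def discrete_policy(logits):
    # Discrete highway actions (e.g. lane changes, accelerate/brake).
    return distrax.Categorical(logits=logits)

def continuous_policy(mean, log_std):
    # Continuous throttle/steering commands modeled as a diagonal Gaussian.
    return distrax.MultivariateNormalDiag(loc=mean, scale_diag=jnp.exp(log_std))

def ppo_clip_loss(new_log_prob, old_log_prob, advantage, epsilon=0.2):
    # Standard PPO clipped surrogate objective (negated, so it can be minimized).
    ratio = jnp.exp(new_log_prob - old_log_prob)
    clipped = jnp.clip(ratio, 1.0 - epsilon, 1.0 + epsilon)
    return -jnp.mean(jnp.minimum(ratio * advantage, clipped * advantage))
```

The same loss works for both cases; only the policy distribution (and therefore the log-probability computation) changes between the discrete and continuous agents.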

Getting Started

Prerequisites

To run this project, you will need to have the following installed on your system (a quick setup check is sketched after this list):

  • Python 3.6 or higher
  • OpenAI Gym
  • highway-env
  • JAX
  • RLax
  • Distrax
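As a quick check that the prerequisites are in place, the sketch below (an assumed setup snippet, not code from this repository) creates the highway environment in the two action-space configurations this project compares. It assumes the highway-env package, which registers the `highway-v0` environment with Gym and supports the `DiscreteMetaAction` and `ContinuousAction` action types.

```python
import gym
import highway_env  # noqa: F401  (registers the highway environments with Gym)

# Discrete action space: high-level meta actions (lane left/right, faster/slower, idle).
discrete_env = gym.make("highway-v0")
discrete_env.configure({"action": {"type": "DiscreteMetaAction"}})
discrete_env.reset()

# Continuous action space: throttle and steering commands.
continuous_env = gym.make("highway-v0")
continuous_env.configure({"action": {"type": "ContinuousAction"}})
continuous_env.reset()

print(discrete_env.action_space, continuous_env.action_space)
```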

Contributing

Contributions are welcome! Please feel free to submit a pull request or raise an issue.

License

This project is licensed under the MIT License.
