ucla-rlcourse / competitive-rl

A set of competitive environments for Reinforcement Learning research.

[Demos: Competitive Pong and Competitive Car-Racing]

Competitive RL Environments

In this repo, we provide two interesting competitive RL environments:

  1. Competitive Pong (cPong): This environment extends the classic Atari game Pong into a competitive setting, where both sides can be trainable agents.
  2. Competitive Car-Racing (cCarRacing): This environment allows multiple cars to race and compete on the same map.

Installation

pip install git+https://github.com/cuhkrlcourse/competitive-rl.git

Usage

You can easily create the vectorized environment with this function:

from competitive_rl import make_envs

num_envs = 4
envs = make_envs("CompetitivePongDouble-v0", num_envs=num_envs, asynchronous=True)

See the docstring of make_envs in make_envs.py for more information.

Note that since the Pong environment is built on the Atari Pong game, we recommend following the standard preprocessing pipeline for observations: convert the image to grayscale, resize it, and apply frame stacking. Please refer to this function and our wrapper for more information.
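The steps above (grayscale, resize, frame stacking) can be sketched as follows. This is a minimal NumPy-only illustration, not the wrapper shipped in this repo: the 84x84 target size, the luminance weights, and the nearest-neighbour resize are conventional assumptions, and real pipelines typically use cv2.resize or the standard Atari wrappers.

```python
from collections import deque

import numpy as np


def preprocess(frame, size=84):
    # Convert an (H, W, 3) RGB frame to grayscale with the standard luminance weights.
    gray = frame @ np.array([0.299, 0.587, 0.114])
    # Naive nearest-neighbour resize to (size, size); shown for illustration only.
    h, w = gray.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return gray[rows][:, cols].astype(np.float32)


class FrameStack:
    """Keep the last k preprocessed frames stacked along the channel axis."""

    def __init__(self, k=4):
        self.frames = deque(maxlen=k)

    def reset(self, frame):
        # Fill the buffer with copies of the first frame.
        for _ in range(self.frames.maxlen):
            self.frames.append(preprocess(frame))
        return np.stack(self.frames, axis=0)

    def step(self, frame):
        self.frames.append(preprocess(frame))
        return np.stack(self.frames, axis=0)
```

With this sketch, a raw (210, 160, 3) Pong observation becomes a (4, 84, 84) tensor suitable as input to a convolutional policy network.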

If you want to create a single Gym environment instance:

import gym
import competitive_rl

competitive_rl.register_competitive_envs()

pong_single_env = gym.make("cPong-v0")
pong_double_env = gym.make("cPongDouble-v0")

racing_single_env = gym.make("cCarRacing-v0")
racing_double_env = gym.make("cCarRacingDouble-v0")

The observation spaces:

  1. cPong-v0: Box(210, 160, 3)
  2. cPongDouble-v0: Tuple(Box(210, 160, 3), Box(210, 160, 3))
  3. cCarRacing-v0: Box(96, 96, 1)
  4. cCarRacingDouble-v0: Box(96, 96, 1)

The action spaces:

  1. cPong-v0: Discrete(3)
  2. cPongDouble-v0: Tuple(Discrete(3), Discrete(3))
  3. cCarRacing-v0: Box(2,)
  4. cCarRacingDouble-v0: Dict(0:Box(2,), 1:Box(2,))
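To act in the double environments, supply one action per player in the structure the action space above describes. The sketch below only builds such actions as plain Python/NumPy values; the concrete action values and the meaning of the two Box dimensions for car racing are illustrative assumptions, not documented semantics.

```python
import numpy as np

# cPongDouble-v0: Tuple(Discrete(3), Discrete(3)) -> one integer action per paddle.
pong_double_action = (0, 2)  # e.g. player 0 picks action 0, player 1 picks action 2

# cCarRacingDouble-v0: Dict(0: Box(2,), 1: Box(2,)) -> one 2-dim continuous action per car.
racing_double_action = {
    0: np.array([0.0, 0.5], dtype=np.float32),
    1: np.array([-0.3, 1.0], dtype=np.float32),
}

# These would then be passed to the usual Gym step call, e.g.:
# obs, reward, done, info = pong_double_env.step(pong_double_action)
```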

Acknowledgement

This repo is contributed by many students and alumni from CUHK: Zhenghao Peng (@pengzhenghao), Edward Hui (@Edwardhk), Yi Zhang (@1155107756), Billy Ho (@Poiutrew1004), Joe Lam (@JoeLamKC)
