praveen-palanisamy / macad-gym

Multi-Agent Connected Autonomous Driving (MACAD) Gym environments for Deep RL. Code for the paper presented in the Machine Learning for Autonomous Driving Workshop at NeurIPS 2019:

Home Page: https://arxiv.org/abs/1911.04175

gym version will affect the usage of ray[rllib]

Morphlng opened this issue

To enable "Multi-Agent" environment training in RLlib, you have to inherit the base class MultiAgentEnv from ray.rllib.env. MACAD-Gym already does this in multi_env.py:

```python
MultiAgentEnvBases = [MultiActorEnv]
try:
    from ray.rllib.env import MultiAgentEnv
    MultiAgentEnvBases.append(MultiAgentEnv)
except ImportError:
    logger.warning("\n Disabling RLlib support.", exc_info=True)
```
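The collected bases are then used when defining the environment class, so the env is an RLlib MultiAgentEnv whenever ray is installed. A minimal, self-contained sketch of that pattern (MultiActorEnv and MultiCarlaEnv here stand in for the actual classes in multi_env.py):

```python
class MultiActorEnv:
    """Stand-in for macad_gym's own multi-actor base class."""

MultiAgentEnvBases = [MultiActorEnv]
try:
    from ray.rllib.env import MultiAgentEnv
    MultiAgentEnvBases.append(MultiAgentEnv)
except ImportError:
    pass  # ray not installed; the env still works as a plain Gym env

class MultiCarlaEnv(*MultiAgentEnvBases):
    # Because MultiAgentEnv is (conditionally) a base class,
    # rllib's isinstance(env, MultiAgentEnv) check passes.
    pass
```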

However, since gym==0.21.0, the environment created with gym.make is automatically wrapped in a class called OrderEnforcing. This breaks the inheritance (isinstance) check in RLlib, causing the training session to fail.

[Screenshot: gym_version]
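A quick way to reproduce the problem (assuming macad-gym is installed; the env id below is one of MACAD-Gym's registered environments, and the exact wrapper module path may differ across gym releases):

```python
import gym
import macad_gym  # noqa: F401 -- importing registers the MACAD-Gym env ids
from ray.rllib.env import MultiAgentEnv

env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")
print(type(env))  # on gym>=0.21: <class 'gym.wrappers.order_enforcing.OrderEnforcing'>
# The wrapper is not a subclass of MultiAgentEnv, so rllib's check fails:
print(isinstance(env, MultiAgentEnv))  # False
```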

This is probably something ray[rllib] should take care of; I'm just reporting it here in case anybody else has run into this problem.

There are also a number of incompatible API usages in the code that should be noted. For example, some private attributes (e.g. Box._shape) are no longer accessible. Users should either stick with gym==0.12.6, or we would have to do a lot of work to handle these problems.
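In most of these cases the private attribute has a public counterpart that works across gym versions; for instance (a small illustrative sketch):

```python
import numpy as np
from gym.spaces import Box

space = Box(low=-1.0, high=1.0, shape=(2, 3), dtype=np.float32)

# Old-style access that breaks on newer gym releases:
# dims = space._shape
# Portable alternative via the public property:
dims = space.shape
print(dims)  # (2, 3)
```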

Thanks for identifying this and posting the details! Like you pointed out, the OpenAI Gym library started wrapping the env with the OrderEnforcing wrapper to ensure the env throws an error if env.step(...) is called before an initial env.reset().
For users on the latest version of gym (gym>=0.21.0) with ray[rllib], the current workaround is to replace env with env.unwrapped in the user's code. This workaround is needed not just with MACAD-Gym environments but likely with any multi-agent Gym environment wrapped in a wrapper class. The ray[rllib] team has decided not to support Gym wrappers in their MultiAgent Env implementation.
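For example, a sketch of the workaround inside an RLlib env creator (the env id and registered name below are illustrative; any MACAD-Gym multi-agent env id works the same way):

```python
import gym
import macad_gym  # noqa: F401 -- importing registers the MACAD-Gym env ids
from ray.tune.registry import register_env

def env_creator(env_config):
    env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")
    # Strip the OrderEnforcing (and any other) wrapper so rllib
    # sees the underlying MultiAgentEnv subclass
    return env.unwrapped

register_env("macad-multi-agent", env_creator)
```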