gym version will affect the usage of ray[rllib]
Morphlng opened this issue
To enable "Multi-Agent" environment training in rllib, you have to inherit from the base class MultiAgentEnv from ray.rllib.env. MACAD-Gym has already done this in multi_env.py:
macad-gym/src/macad_gym/carla/multi_env.py, Lines 237 to 243 in 912944c
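For context, here is a minimal sketch of that inheritance pattern. The toy env below is hypothetical and only stands in for the much more involved class in multi_env.py:

```python
import numpy as np
from gym.spaces import Box, Discrete
from ray.rllib.env import MultiAgentEnv


class ToyMultiAgentEnv(MultiAgentEnv):
    """Hypothetical stand-in for the MACAD-Gym env: inheriting directly
    from MultiAgentEnv is what rllib's type check looks for."""

    def __init__(self, config=None):
        self.agents = ["car1", "car2"]
        self.observation_space = Box(-1.0, 1.0, shape=(2,), dtype=np.float32)
        self.action_space = Discrete(2)
        self._step = 0

    def reset(self):
        self._step = 0
        # Multi-agent envs return dicts keyed by agent id.
        return {a: self.observation_space.sample() for a in self.agents}

    def step(self, action_dict):
        self._step += 1
        obs = {a: self.observation_space.sample() for a in action_dict}
        rewards = {a: 0.0 for a in action_dict}
        dones = {"__all__": self._step >= 10}
        return obs, rewards, dones, {}
```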
However, since gym==0.21.0, the environment created with gym.make is automatically wrapped in a class called OrderEnforcing. This breaks the inheritance check in rllib, causing the training session to fail.
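To make the failure mode concrete, here is a sketch of the check that breaks. The env id is one of MACAD-Gym's registered examples; adjust to whichever env you use:

```python
import gym
import macad_gym  # registers the MACAD-Gym env ids  # noqa: F401
from ray.rllib.env import MultiAgentEnv

env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")

# Under gym>=0.21.0, gym.make returns an OrderEnforcing wrapper...
print(type(env))  # <class 'gym.wrappers.order_enforcing.OrderEnforcing'>

# ...so the isinstance check rllib performs no longer sees MultiAgentEnv.
print(isinstance(env, MultiAgentEnv))            # False -> rllib rejects the env
print(isinstance(env.unwrapped, MultiAgentEnv))  # True
```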
This is probably something ray[rllib] should take care of; I'm just reporting it here in case anybody else has run into this problem.
There are also a number of incompatible API usages in the code that should be noted. For example, some private attributes (e.g. Box._shape) are no longer accessible. Users should either stick with gym==0.12.6, or a lot of work will be needed to handle these problems.
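As an illustration of the private-attribute issue (a sketch; exactly which attributes break depends on the gym release):

```python
import numpy as np
from gym.spaces import Box

box = Box(low=-1.0, high=1.0, shape=(3,), dtype=np.float32)

# Portable across gym versions: use the public property.
print(box.shape)  # (3,)

# Fragile: `_shape` is an implementation detail and has moved between
# releases, so code that reads or assigns it directly can raise
# AttributeError after upgrading gym.
# print(box._shape)
```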
Thanks for identifying this and posting with details! Like you pointed out, the OpenAI Gym library started wrapping the env with the OrderEnforcing wrapper to ensure the env throws an error if env.step(...) is called before an initial env.reset().
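A quick sketch of that behavior (using CartPole only because it needs no extra setup; the exact exception type varies across gym releases):

```python
import gym

env = gym.make("CartPole-v1")  # wrapped in OrderEnforcing under gym>=0.21.0

try:
    env.step(env.action_space.sample())  # stepping before reset()
except Exception as err:
    # gym 0.21 raises an AssertionError here; later releases use
    # gym.error.ResetNeeded.
    print(type(err).__name__, err)

env.reset()
env.step(env.action_space.sample())  # fine once reset() has been called
```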
For users using the latest version of gym (gym>=0.21.0) with ray[rllib], the current workaround is to replace env with env.unwrapped in the user's code. This workaround is needed not just with MACAD-Gym environments but likely with any multi-agent Gym environment with a wrapper class. The ray[rllib] team has decided not to support Gym wrappers in their MultiAgentEnv implementation.
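Here is a sketch of the workaround wired into an rllib env creator. The env id and registry name below are placeholders; substitute your own:

```python
import gym
import macad_gym  # registers the MACAD-Gym env ids  # noqa: F401
from ray.tune.registry import register_env


def env_creator(env_config):
    env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")
    # Strip the OrderEnforcing wrapper so rllib's
    # isinstance(env, MultiAgentEnv) check passes again.
    return env.unwrapped


register_env("macad_multi_agent", env_creator)
```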