huawei-noah / SMARTS

Scalable Multi-Agent RL Training School for Autonomous Driving


[Help Request] Observation

UnicornHJ opened this issue

High Level Description

Hello.
I am currently using SMARTS for a related project, but I have encountered several problems along the way:

  1. I would like to adjust the vehicle observation: add masks for vehicles that have not yet appeared in the scene (but will appear later) and for vehicles that may appear in the future, or modify the observation range. How should I adjust the observation part?
  2. How should I run the benchmarks, such as PPO and SAC, in the baselines? The official website does not explain how to run them.

Thank you for your help.

Version

1.0

Operating System

Ubuntu 20.04


Hello @UnicornHJ, sorry, I cannot reply right now; we will reply tomorrow.

Thank you very much for your reply.
Looking forward to your answer, thanks!

Hello @UnicornHJ, the MARL benchmark is fairly old and has been moved out of SMARTS because it is tied to smarts==0.4.16. We keep a version of it at https://github.com/smarts-project/smarts-project.rl/tree/master/marl_benchmark.

  • If you want to run a different algorithm you will need to call into a different configuration; these live in the marl_benchmark/agents directory:
    • python3.8 run.py scenarios/sumo/intersections/4lane -f agents/maddpg/baseline-lane-control.yaml
  • The configuration for the observation_adapter and observation_space is generated within marl_benchmarks/agents/__init__.py::_make_rllib_config, which in turn calls down through FrameStack.get_observation_space and FrameStack.get_observation_adapter. That configuration comes through here:
agent:
  state:
    wrapper:
      name: FrameStack
      num_stack: 3
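
To make num_stack concrete, here is a minimal, hypothetical sketch of frame stacking in general; SimpleFrameStack and its method names are made up for illustration, and the benchmark's actual FrameStack has a different interface:

# Illustrative only: keep the last num_stack observations and expose them
# as one stacked state, the way a frame-stacking wrapper does.
from collections import deque

import numpy as np


class SimpleFrameStack:
    def __init__(self, num_stack=3):
        self.num_stack = num_stack
        self.frames = deque(maxlen=num_stack)

    def reset(self, first_obs):
        # Pad the window with copies of the first observation.
        self.frames.clear()
        for _ in range(self.num_stack):
            self.frames.append(first_obs)
        return np.stack(self.frames)

    def step(self, obs):
        # The deque drops the oldest frame as the newest one is appended.
        self.frames.append(obs)
        return np.stack(self.frames)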
  • FrameStack is dynamically loaded from marl_benchmark.wrappers.rllib.__init__.py, so adding a wrapper there (or injecting the wrapper into marl_benchmark.wrappers.rllib's module attributes) allows selecting it from the yaml file.
  • Such a wrapper should inherit from marl_benchmark.wrappers.rllib.wrapper.Wrapper; a sketch of a masking helper follows the feature list below.
  • For a reference Wrapper implementation, see FrameStack: it uses CalObs from marl_benchmark/common.py, and the feature names match the cal_* methods on CalObs (e.g. steering in agents/maddpg/baseline-lane-control.yaml matches CalObs.cal_steering). Feature keys with non-boolean values are only added if they are included in the configuration:
agent:
  state:
    features:
      goal_relative_pos: True # enable (bool)
      distance_to_center: True # enable (bool)
      speed: True # enable (bool)
      steering: True # enable (bool)
      heading_errors: [20, continuous] # num_waypoints (int), continuous/spaced (str)
      neighbor: 8 # num_closest_neighbors (int)
      lane_its_info: False # enable intersection info (bool)
      img_gray: 256 # resize grayscale (int)
      proximity: False # enable (bool)
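
As mentioned above, here is a rough sketch of the masking idea from the original question, assuming a fixed-size neighbor feature array such as the one the neighbor: 8 key configures; mask_neighbors is a hypothetical helper, and the wrapper hook it would live in should be checked against marl_benchmark.wrappers.rllib.wrapper.Wrapper:

import numpy as np


def mask_neighbors(neighbor_feats, distances, max_range=50.0):
    # neighbor_feats: (num_neighbors, feat_dim) array, one row per vehicle.
    # distances: (num_neighbors,) distances from the ego vehicle.
    # Zero out the rows of neighbors beyond max_range.
    masked = neighbor_feats.copy()
    masked[distances > max_range] = 0.0
    return masked

Vehicles that have not yet entered the scene simply do not appear among the neighbors, so a fixed-size neighbor tensor whose padding rows are zeroed also acts as a mask for vehicles that will only appear later.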
  • The CalObs features are mapped to the SPACE_LIB spaces; any change to one may require a matching change to the other (see the sketch after this list).
  • If you wish the MARL benchmark to be modernized, you would need to put in a request for that.
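
To illustrate that pairing, here is a hypothetical sketch of how a feature key, its cal_* method, and its SPACE_LIB entry line up; the real definitions live in marl_benchmark/common.py, and the bounds and signatures here are invented:

import numpy as np
from gym import spaces

# Each feature key maps to a callable that builds its observation space.
SPACE_LIB = {
    "speed": lambda _: spaces.Box(low=-330.0, high=330.0, shape=(1,), dtype=np.float32),
}


class CalObs:
    @staticmethod
    def cal_speed(env_obs):
        # A feature named "speed" in the yaml resolves to this cal_speed method.
        return np.asarray([env_obs.ego_vehicle_state.speed], dtype=np.float32)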

Thank you very much for your reply.
I will try it.
Also, may I ask whether your email address can receive messages? I have just emailed a few more questions to you!
Or could you please provide me with another email address?

Hello, @UnicornHJ, I received your email. It looks like it went to spam. I will see if I can answer your questions.

Ego-centric observations were (unfortunately) not native to SMARTS, aside from perhaps the lidar. The camera observations are top-down, so simulating vehicle occlusion would take a bit of work.
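
For a sense of the work involved, the core of a simple occlusion check is a 2D angular-shadow test; this sketch is purely illustrative, using plain (x, y) positions and a rough blocker_radius for vehicle footprints rather than any SMARTS API:

import math


def is_occluded(ego, target, blockers, blocker_radius=1.0):
    # True if any closer blocker's angular extent covers the target's bearing.
    tx, ty = target[0] - ego[0], target[1] - ego[1]
    t_dist = math.hypot(tx, ty)
    t_bearing = math.atan2(ty, tx)
    for bx, by in blockers:
        dx, dy = bx - ego[0], by - ego[1]
        b_dist = math.hypot(dx, dy)
        if b_dist == 0.0 or b_dist >= t_dist:
            continue  # only strictly closer vehicles can occlude
        # Half-angle subtended by the blocker's footprint at the ego position.
        half_angle = math.asin(min(1.0, blocker_radius / b_dist))
        # Smallest signed difference between the two bearings, wrapped to (-pi, pi].
        diff = (math.atan2(dy, dx) - t_bearing + math.pi) % (2 * math.pi) - math.pi
        if abs(diff) <= half_angle:
            return True
    return False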

I can answer in more detail later by email if you are still interested.

Thank you very much! I am very interested in this area and look forward to your reply!