AEMCARL

Reinforcement Learning Based Collision Avoidance with Adaptive Environment Modeling for Crowded Scenes

This project is based on CrowdNav.

Abstract

The major challenges of collision avoidance for robot navigation in crowded scenes lie in accurate environment modeling, fast perception, and trustworthy motion planning policies. This paper presents a novel adaptive environment model based collision avoidance reinforcement learning (AEMCARL) framework for an unmanned robot to achieve collision-free motion in challenging navigation scenarios. The novelty of this work is threefold: (1) developing a hierarchical gated recurrent unit (GRU) network for environment modeling; (2) developing an adaptive perception mechanism with an attention module; (3) developing an adaptive reward function for the reinforcement learning (RL) framework to jointly train the environment model, perception function and motion planning policy. The proposed method is tested with the Gym-Gazebo simulator and a group of robots (Husky and Turtlebot) in various crowded scenes. Both simulation and experimental results demonstrate the superior performance of the proposed method over baseline methods.
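
For intuition, the sketch below shows (in PyTorch) roughly what a GRU encoder with an attention-based pooling step can look like: one GRU encodes each observed human's state history, and an attention module pools the per-human features into a single environment representation. This is an illustrative toy, not the network used in this repository; all module names, feature sizes, and shapes are made up for the example.

```python
import torch
import torch.nn as nn

class GRUAttentionEncoder(nn.Module):
    """Toy encoder: a GRU over each human's observation history,
    then attention-weighted pooling into one environment embedding."""

    def __init__(self, obs_dim=7, hidden_dim=64):
        super().__init__()
        self.gru = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.attention = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, tracks):
        # tracks: (num_humans, seq_len, obs_dim) -- one observation sequence per human
        _, h = self.gru(tracks)                 # h: (1, num_humans, hidden_dim)
        h = h.squeeze(0)                        # (num_humans, hidden_dim)
        weights = torch.softmax(self.attention(h), dim=0)  # attention over humans
        return (weights * h).sum(dim=0)         # (hidden_dim,) joint environment feature

# Example: 5 observed humans, an 8-step history, 7 features per observation
encoder = GRUAttentionEncoder()
print(encoder(torch.randn(5, 8, 7)).shape)  # torch.Size([64])
```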

Citation

If you use AEMCARL for academic research, you are highly encouraged to cite the following paper:

@article{wang2022adaptive,
  title={Adaptive Environment Modeling Based Reinforcement Learning for Collision Avoidance in Complex Scenes},
  author={Wang, Shuaijun and Gao, Rui and Han, Ruihua and Chen, Shengduo and Li, Chengyang and Hao, Qi},
  journal={arXiv preprint arXiv:2203.07709},
  year={2022}
}

Method Overview

Setup

  1. Install the Python-RVO2 library (a quick import check is sketched after this list).
  2. Install crowd_sim and crowd_nav with pip:
pip install -e .
  3. Create the conda environment:
conda env create -f env.yaml
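
If the Python-RVO2 build succeeded, a quick import check like the one below should run without errors. The simulator parameters are the example values from the Python-RVO2 documentation, not values tuned for AEMCARL.

```python
# Sanity check that the Python-RVO2 binding is importable and usable.
import rvo2

# (time step, neighbor dist, max neighbors, time horizon,
#  time horizon obst, radius, max speed)
sim = rvo2.PyRVOSimulator(1 / 60., 1.5, 5, 1.5, 2, 0.4, 2)
sim.addAgent((0.0, 0.0))   # add a single agent at the origin
sim.doStep()               # advance the simulation by one step
print("Python-RVO2 is working")
```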

Getting started

This repository is organized in two parts: the crowd_sim/ folder contains the simulation environment, and the crowd_nav/ folder contains the code for training and testing the policies. Details of the simulation framework can be found here. Below are the instructions for training and testing policies; they should be executed inside the crowd_nav/ folder.
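
As a rough sketch of how the crowd simulation environment is created (this mirrors upstream CrowdNav usage; the config path, section names, and exact API are assumptions and may differ slightly in this repository):

```python
# Sketch: creating the CrowdNav-style simulation environment.
import configparser
import gym
import crowd_sim  # noqa: F401 -- importing this package registers 'CrowdSim-v0'
from crowd_sim.envs.utils.robot import Robot

config = configparser.RawConfigParser()
config.read('configs/env.config')       # assumed location, run from inside crowd_nav/

env = gym.make('CrowdSim-v0')
env.configure(config)                   # environment parameters come from the INI file
env.set_robot(Robot(config, 'robot'))   # the controlled agent is defined in the [robot] section
```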

  1. Train a policy:
python train.py --policy actenvcarl --test_policy_flag 5 --multi_process self_attention --optimizer Adam --agent_timestep 0.4 --human_timestep 0.5 --reward_increment 2.0 --position_variance 2.0 --direction_variance 2.0
  2. Test the trained policy with 500 test cases:
python test.py --policy actenvcarl --test_policy_flag 5 --multi_process self_attention --agent_timestep 0.4 --human_timestep 0.5 --reward_increment 2.0 --position_variance 2.0 --direction_variance 2.0 --model_dir data/output
  3. Run the policy for one episode and visualize the result (here, test case 0):
python test.py --policy actenvcarl --test_policy_flag 5 --multi_process self_attention --agent_timestep 0.4 --human_timestep 0.5 --reward_increment 2.0 --position_variance 2.0 --direction_variance 2.0 --model_dir data/output --phase test --visualize --test_case 0
  4. Plot the training curve:
python utils/plot.py data/output/output.log

Simulation Videos

AEMCARL

Gazebo (4X)


License: MIT License

