
Cadre

This is the code accompanying the paper "CADRE: A Cascade Deep Reinforcement Learning Framework for Vision-based Autonomous Urban Driving" by Yinuo Zhao, Kun Wu, et al., published at AAAI 2022.

πŸ“„ Description

CADRE is a novel CAscade Deep REinforcement learning framework for model-free, vision-based autonomous urban driving on the CARLA benchmark. We also provide an environment wrapper for CARLA that is suitable for distributed DRL training.

Installation

  1. Clone the repo

    git clone https://github.com/BIT-MCS/Cadre.git
    cd Cadre
    
  2. Create a conda virtual environment and install the dependencies

    conda create -n cadre python=3.7
    conda activate cadre
    pip install -r requirements.txt
    
  3. Download the trained perception model from Google Drive and place it under carla_perception/

  4. Download the CARLA 0.9.10 server from the official website (a hedged download sketch follows this list).
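For step 4, a minimal download sketch. The S3 link below follows CARLA's usual release-hosting pattern for 0.9.10; it is an assumption, so verify it against the official releases page before use:

    # Assumed URL pattern for the CARLA 0.9.10 Linux release; verify on
    # https://github.com/carla-simulator/carla/releases before running.
    wget https://carla-releases.s3.eu-west-3.amazonaws.com/Linux/CARLA_0.9.10.tar.gz
    mkdir -p ~/CARLA_0.9.10
    tar -xzf CARLA_0.9.10.tar.gz -C ~/CARLA_0.9.10   # CarlaUE4.sh lands in this directory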

πŸ’» Quick Start

To quickly test the installation, we provide a simple test script. To run this example, you need to first start the server and then start the client.

To start the server, run the script under scripts/start_server.sh. Make sure to replace CARLA_ROOT with your own CARLA directory.
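A minimal sketch of the launch such a script performs. The flags are standard CARLA 0.9.10 server options, not necessarily the exact contents of start_server.sh:

    # Hedged sketch of a CARLA 0.9.10 server launch; see scripts/start_server.sh
    # for the options the project actually uses.
    export CARLA_ROOT=[PATH TO YOUR LOCAL DIRECTORY WITH CarlaUE4.sh]
    bash ${CARLA_ROOT}/CarlaUE4.sh --world-port=2000 -opengl -quality-level=Low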

To start the client, we provide a test script under scripts/simple_test.sh; change CARLA_ROOT and CHALLENGE_DIR in it to match your setup.
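Once the server is up, the client can be launched from a second terminal, roughly as follows (assuming the two paths have already been edited inside the script):

    # Hedged sketch: CARLA_ROOT and CHALLENGE_DIR must be set inside
    # scripts/simple_test.sh before running; run from the repo root.
    conda activate cadre
    bash scripts/simple_test.sh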

If the installation is successful, you will see two windows appear on your computer.


πŸ’» Training

To start the server, run scripts/start_server.sh as in the Quick Start section, again replacing CARLA_ROOT with your own CARLA directory.

To start the training client, we provide a training script under scripts/main.sh; change CARLA_ROOT and CHALLENGE_DIR in it to match your setup. The hyperparameters are configured in config_files/agent_config.py. We recommend setting num_processes to 4 to obtain a more stable policy.
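For example, assuming agent_config.py contains a plain "num_processes = <int>" assignment (an assumption about the file's layout), the value can be changed in place:

    # Hedged: assumes a literal "num_processes = <int>" line in the config file;
    # open the file and edit by hand if this pattern does not match.
    sed -i 's/num_processes *= *[0-9]\+/num_processes = 4/' config_files/agent_config.py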

Models and log files are saved under result/.

After training finishes, we recommend using the script under scripts/kill_server.sh to kill all CARLA servers running on the machine.
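A minimal sketch of such a cleanup (the actual kill_server.sh may be more selective about which processes it targets):

    # Hedged sketch: terminates every process whose command line matches CarlaUE4.
    pkill -f CarlaUE4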

For reference, the training script has the following structure:

    #!/bin/bash
    export CARLA_ROOT=[PATH TO YOUR LOCAL DIRECTORY WITH CarlaUE4.sh]
    export CHALLENGE_DIR=[PATH TO THE DIRECTORY CONTAINING leaderboard AND scenario_runner]

    export PYTHONPATH=$PYTHONPATH:$CARLA_ROOT/PythonAPI/carla
    export PYTHONPATH=$PYTHONPATH:$CARLA_ROOT/PythonAPI/carla/dist/carla-0.9.10-py3.7-linux-x86_64.egg   # CARLA 0.9.10
    export PYTHONPATH=$PYTHONPATH:$CHALLENGE_DIR/leaderboard
    export PYTHONPATH=$PYTHONPATH:$CHALLENGE_DIR/scenario_runner
    export HAS_DISPLAY='0'

    python main.py

πŸ’» Evaluation

To evaluate the models, please refer to the script under scripts/eval.sh. Set pretrained_path and load_episode in eval_cfg under config_files/eval_agent_config.py. We recommend using 8 models from different episodes for a more stable policy. You can also change the number of vehicles/pedestrians and the routes in this config file.
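A hedged sketch of an evaluation run (the config field names are taken from this README; the launch itself mirrors training, with a CARLA server already running):

    # Hedged: set pretrained_path and load_episode in
    # config_files/eval_agent_config.py first, then run with the server up.
    conda activate cadre
    bash scripts/eval.sh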

After the evaluation ends, the results can be found under ${pretrained_path}/eval/eval_completion_ratio.csv.
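Assuming a standard comma-separated layout (the exact columns depend on the evaluation code), the results can be inspected quickly:

    # Hedged: pretty-print the completion-ratio CSV; replace the path with your run's.
    column -s, -t "${pretrained_path}/eval/eval_completion_ratio.csv" | head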

πŸ“œ Acknowledgement

This work was supported in part by the Shanghai Pujiang Program and the National Research and Development Program of China (No. 2019YQ1700).

πŸ“§ Contact

If you have any questions, please email ynzhao@bit.edu.cn or linda.chao.007@gmail.com.

Note

This project includes some implementations from DANet, and the overall evaluation framework follows the CARLA scenario runner, carla_project (no license), and leaderboard.

Paper

If you are interested in our work, please cite our paper as:

@inproceedings{zhao2022cadre,
  author    = {Zhao, Yinuo and Wu, Kun and Xu, Zhiyuan and Che, Zhengping and Lu, Qi and Tang, Jian and Liu, Chi Harold},
  title     = {CADRE: A Cascade Deep Reinforcement Learning Framework for Vision-based Autonomous Urban Driving},
  booktitle = {Association for the Advancement of Artificial Intelligence (AAAI)},
  year      = {2022},
}
