A Comparative Study of Reinforcement Learning-based Transferable EMS for HEVs

Source code for the IV 2022 paper A Comparative Study of Deep Reinforcement Learning-based Transferable Energy Management Strategies for Hybrid Electric Vehicles. More results will be presented in follow-up work.

[IEEE Xplore IV 2022] [ArXiv]

If you use our implementation in your academic work, please cite the corresponding paper:

@INPROCEEDINGS{xu2022iv,
  author={Xu, Jingyi and Li, Zirui and Gao, Li and Ma, Junyi and Liu, Qi and Zhao, Yanan},
  booktitle={2022 IEEE Intelligent Vehicles Symposium (IV)}, 
  title={A Comparative Study of Deep Reinforcement Learning-based Transferable Energy Management Strategies for Hybrid Electric Vehicles}, 
  year={2022},
  pages={470-477},
  doi={10.1109/IV51971.2022.9827042}}

Abstract

Deep reinforcement learning-based energy management strategies (EMS) have become a promising solution for hybrid electric vehicles (HEVs). When driving cycles change, however, the network must be retrained, which is a time-consuming and laborious task. A more efficient way to obtain an EMS is to combine deep reinforcement learning (DRL) with transfer learning, which transfers knowledge from one domain to a new domain so that the network for the new domain converges quickly. In this work, different RL exploration methods, including adding action space noise and parameter space noise, are compared against each other in the transfer learning process. Results indicate that the network with parameter space noise is more stable and converges faster than the others. In conclusion, the best exploration method for a transferable EMS is to add noise in the parameter space, while the combination of action space noise and parameter space noise generally performs poorly.

Preparation

Before starting to work with our framework, a few preparations are required.

Hardware

Our framework was developed on a laptop with the following configuration:

  • CPU: Intel(R) Core(TM) i7-10750H @ 2.60GHz
  • GPU: NVIDIA GeForce RTX 2070 Super

Dependencies

Before using our code, install the following dependencies:

  • tensorflow 1.13.2
  • numpy 1.21.2
  • scipy 1.7.1
  • matplotlib 3.4.3
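
Note that the pinned TensorFlow 1.13.x release only ships wheels for Python 3.7 and earlier. As a quick sanity check, a short script like the one below (our illustration, not part of the repository) prints the installed versions:

```python
# Quick environment check (illustrative, not part of the repository).
import tensorflow as tf
import numpy as np
import scipy
import matplotlib

for name, module in [("tensorflow", tf), ("numpy", np),
                     ("scipy", scipy), ("matplotlib", matplotlib)]:
    print(name, module.__version__)
```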

Tutorial

Data_Standard Driving Cycles: training data and test data
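
The on-disk format of these cycles is not documented here; purely as an illustration, assuming each cycle is a MATLAB .mat file holding a 1-D speed trace (the scipy dependency hints at scipy.io), loading one might look like the sketch below. The file name and variable layout are hypothetical:

```python
# Hypothetical loader: the file name "cycle_01.mat" and the variable layout
# are assumptions for illustration, not the repository's actual data format.
import numpy as np
import scipy.io as sio

mat = sio.loadmat("Data_Standard Driving Cycles/cycle_01.mat")
keys = [k for k in mat if not k.startswith("__")]   # skip .mat metadata entries
speed = np.asarray(mat[keys[0]]).squeeze()          # assumed 1-D speed trace
print("cycle length:", speed.shape[0], "time steps")
```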

DDPG_Prius_source_adding_noise.py: the training code for the source domain. To add action space noise, set "action_noise_type" to "gs" for simple Gaussian noise or to "ou" for the more advanced, temporally correlated Ornstein-Uhlenbeck (OU) noise process; to train without action space noise, set "action_noise_type" to "None". To add parameter space noise, set the "param_noise" argument of the choose_action function to True; otherwise, set it to False.
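
For orientation, the sketch below shows what the three exploration options amount to; the class and function names are illustrative, not the repository's API:

```python
# Illustrative exploration noise; names and defaults are ours, not the repo's.
import numpy as np

class GaussianNoise:                  # action_noise_type = "gs"
    def __init__(self, sigma=0.1):
        self.sigma = sigma
    def __call__(self, action):
        return action + np.random.normal(0.0, self.sigma, np.shape(action))

class OUNoise:                        # action_noise_type = "ou"
    """Ornstein-Uhlenbeck process: noise that is correlated across time steps."""
    def __init__(self, size, theta=0.15, sigma=0.2, dt=1e-2):
        self.theta, self.sigma, self.dt = theta, sigma, dt
        self.x = np.zeros(size)
    def __call__(self, action):
        self.x += (self.theta * (0.0 - self.x) * self.dt
                   + self.sigma * np.sqrt(self.dt) * np.random.randn(*self.x.shape))
        return action + self.x

def perturb_params(weights, stddev=0.05):
    """Parameter space noise (param_noise=True): act with a perturbed
    copy of the actor's weights instead of perturbing the action itself."""
    return [w + np.random.normal(0.0, stddev, w.shape) for w in weights]

noise = OUNoise(size=1)                                   # or GaussianNoise(), or None
noisy_action = np.clip(noise(np.array([0.5])), 0.0, 1.0)  # keep the action in bounds
```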

DDPG_Prius_transfer_learning.py: the training code for the target domain. The different noise types are added in the same way as in source-domain training.
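
Conceptually, the transfer step warm-starts the target-domain networks with the weights learned in the source domain rather than training from scratch. Below is a minimal TensorFlow 1.x sketch of that idea; the tiny stand-in network and the checkpoint path are placeholders, not the repository's actual graph:

```python
# Illustrative warm start for transfer learning (TF 1.x); the tiny "actor"
# below and the checkpoint path are placeholders, not the repository's graph.
import os
import numpy as np
import tensorflow as tf

tf.reset_default_graph()
state = tf.placeholder(tf.float32, [None, 3], name="state")
action = tf.layers.dense(state, 1, activation=tf.nn.sigmoid, name="actor")

saver = tf.train.Saver()
os.makedirs("./checkpoints", exist_ok=True)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, "./checkpoints/source_ddpg")     # end of source-domain training
    saver.restore(sess, "./checkpoints/source_ddpg")  # start of target-domain training
    print(sess.run(action, {state: np.zeros((1, 3), np.float32)}))
```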


License

MIT License

