tahanakabi / DRL-for-microgrid-energy-management

We study the performance of several deep reinforcement learning algorithms on the problem of microgrid energy management. We propose a novel microgrid model that consists of a wind turbine generator, an energy storage system, a population of thermostatically controlled loads, a population of price-responsive loads, and a connection to the main grid. The proposed energy management system coordinates the different sources of flexibility by defining the priority resources, the direct demand-control signals, and the electricity prices. Seven deep reinforcement learning algorithms are implemented and empirically compared. The numerical results show significant differences between the algorithms in their ability to converge to optimal policies. By adding an experience replay and a second, semi-deterministic training phase to the well-known asynchronous advantage actor-critic (A3C) algorithm, we achieve considerably better performance and converge to superior policies in terms of energy efficiency and economic value.
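The experience-replay component mentioned above can be illustrated with a generic ring buffer of transitions. This is a minimal sketch of the standard technique, not the paper's exact implementation; the capacity, batch size, and transition layout are assumptions:

```python
import random
from collections import deque

# Generic experience-replay buffer (illustrative; not the paper's code).
class ReplayBuffer:
    def __init__(self, capacity=10000):
        # deque drops the oldest transitions once capacity is reached
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # uniform random minibatch, as in standard experience replay
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)

# Toy usage: store 50 dummy transitions, then draw a minibatch of 8.
buf = ReplayBuffer(capacity=100)
for i in range(50):
    buf.add(i, 0, 1.0, i + 1, False)
states, actions, rewards, next_states, dones = buf.sample(8)
```

Replaying stored transitions breaks the temporal correlation of on-policy rollouts, which is one plausible reason the modified A3C converges to better policies.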


Deep Reinforcement Learning for Microgrid Energy Management

This repository contains an implementation of a Deep Reinforcement Learning (DRL) algorithm for managing the energy demand and supply of a microgrid. The implementation is built using Python and is based on the OpenAI Gym environment.
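As a rough illustration of the Gym-style interface such an environment exposes, the sketch below defines reset() and step() for a toy microgrid with one battery. All state, action, and reward definitions here are invented for illustration and are not the repository's actual model:

```python
import math

# Toy Gym-style microgrid environment (illustrative only; the real
# environment's states, actions, and rewards differ).
class MicrogridEnvSketch:
    def __init__(self, horizon=24):
        self.horizon = horizon  # time steps per episode (e.g. hours)
        self.t = 0
        self.soc = 0.5          # battery state of charge in [0, 1]

    def _obs(self):
        # observation: normalized time, state of charge, toy wind output
        wind = 0.5 + 0.5 * math.sin(2 * math.pi * self.t / self.horizon)
        return [self.t / self.horizon, self.soc, wind]

    def reset(self):
        self.t = 0
        self.soc = 0.5
        return self._obs()

    def step(self, action):
        # action in [-1, 1]: fraction of battery power to charge (+)
        # or discharge (-)
        self.soc = min(1.0, max(0.0, self.soc + 0.1 * action))
        # toy reward: keep the battery near half charge
        reward = -abs(0.5 - self.soc)
        self.t += 1
        done = self.t >= self.horizon
        return self._obs(), reward, done, {}

# Roll out one episode with a do-nothing policy.
env = MicrogridEnvSketch()
obs, done, total = env.reset(), False, 0.0
while not done:
    obs, r, done, _ = env.step(0.0)
    total += r
```

An agent only needs this reset/step contract, which is what lets the same environment be reused across different DRL algorithms.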

Installation

1. Clone the repository and navigate to the directory.
2. Create the conda environment:
   conda env create -f conda.yaml
3. Activate the environment:
   conda activate tf2-gpu

Usage

To train the DRL agent, run:

python A3C_plusplus.py --train

To evaluate a trained model, run the same file with the --test option:

python A3C_plusplus.py --test
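The two modes above suggest a simple command-line interface. Below is a hedged sketch of how such flags could be parsed with argparse; only the flag names --train and --test come from this README, everything else is an assumption about the script:

```python
import argparse

# Sketch of the CLI implied by the README; the flag names match the
# README, but the parser itself is an assumption about A3C_plusplus.py.
def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="A3C++ microgrid agent")
    parser.add_argument("--train", action="store_true",
                        help="train a new agent")
    parser.add_argument("--test", action="store_true",
                        help="evaluate a saved agent")
    return parser.parse_args(argv)

args = parse_args(["--train"])
```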

Contributing

Contributions to this repository are welcome! If you find a bug or have an idea for an improvement, please submit a pull request.

License

This code is released under the MIT License. More information about this project can be found at: https://doi.org/10.1016/j.segan.2020.100413
