
Deep reinforcement learning tool for demand response in smart grids with high penetration of renewable energy sources.


smartgrid_DRL

This repository is a deep reinforcement learning tool for demand response in smart grids with high penetration of renewable energy sources.


Table of Contents

  • Description
  • Installation
  • Documentation
  • Author


Description

The repository has been developed alongside this master's thesis, which provides the mathematical calculations, the data used, and the analysis of the results present in this repository. The original work remains practically intact, apart from a few minor written imprecisions, which have been corrected. It is reproducible, even in a distributed environment, yielding the same results as the thesis as long as the simulations run on a 40-CPU machine: reproducibility depends on the number of cores used for the distributed simulation, since the CPU count is used as a seed.

The Stable Baselines3 agents are used for the explored DRL algorithms, together with the Optuna hyperparameter optimization framework.

The repository provides scripts for training and evaluating the agents in a custom smart-grid simulation environment, for distributed hyperparameter tuning, and for writing report tables and plotting results. The environment is also integrated with the Pandapower power flow calculator.


Installation

Prerequisites

The Python packages used are:

  • kaleido >= 0.2.1
  • natsort >= 8.1.0
  • numba >= 0.56.0
  • optuna >= 3.0.2
  • pandapower >= 2.10.1
  • plotly >= 5.10.0
  • psutil >= 5.9.1
  • sb3_contrib >= 1.6.0
  • sklearn >= 0.0
  • stable-baselines3 >= 1.6.0
  • tensorboard >= 2.10.0
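Assuming no pinned requirements file is shipped, the prerequisites above can be installed in one step with pip (version bounds copied from the list; `sklearn` is the legacy PyPI alias for scikit-learn):

```shell
pip install "kaleido>=0.2.1" "natsort>=8.1.0" "numba>=0.56.0" \
            "optuna>=3.0.2" "pandapower>=2.10.1" "plotly>=5.10.0" \
            "psutil>=5.9.1" "sb3_contrib>=1.6.0" "sklearn>=0.0" \
            "stable-baselines3>=1.6.0" "tensorboard>=2.10.0"
```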

Plotting set-up

Installing LaTeX for plotting

sudo apt-get install python3-graphviz python3-tk texlive-latex-base texlive-latex-extra texlive-fonts-recommended dvipng cm-super

Documentation

The code runs as is. The single-model multi-objective optimization performed in env_tuning is distributed by the env_parallelization program. The same idea applies to the multi-model single-objective optimization performed with the analogous models_tuning and models_parallelization programs.

Once training and evaluation are done, the files with "visualization" in their name plot analyses of the data in the generated "evaluation/" folder. The files with "report" in their name read the .db file created by Optuna once the study is over, generate .tex and .csv table reports, and plot information about the hyperparameters defined in optuna.create_study().


Author

@misc{smartgrid_DRL,
  title={Deep reinforcement learning tool for demand response in smart grids},
  author={Fisco, Pau},
  howpublished={GitHub},
  note={https://github.com/pau-3i8/smartgrid_DRL},
  volume={1},
  year={2022}
}

Back to the top

About


License: MIT License


Languages

  • Python 98.5%
  • Cython 1.5%