ADGEfficiency / energy-py

Reinforcement learning for energy systems


Electricity Price for Electric battery storage


Hi Adam,
I am trying your demo of electric battery storage, but there is something strange in my results.
The electricity prices in "observation.csv" are different from the results in your demo.

your demo:
(screenshot)

results with "observation.csv":
(screenshot)

Can you also help me understand what the different columns represent (
C_forecast_electricity_price_hh_0 [$/MWh] | C_forecast_electricity_price_hh_1 [$/MWh] | C_forecast_electricity_price_hh_2 [$/MWh] | C_forecast_electricity_price_hh_3 [$/MWh])?
Are they forecasts for the following 1, 2, 3 hours?

Best regards,

Luca

Hi Luca,

I've made changes to the electricity price data - mostly to make it more interesting (i.e. more spikes). You do raise an interesting point about consistency of package data though - I'll have a think about this.

The columns are the forecasts for the next half-hours - 0 being the current half-hour, 1 being the next, and so on.
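A rough sketch of that column layout: each hh_N column is the price N half-hours ahead of the row's timestamp, which can be reproduced by shifting the price series (the prices below are made up for illustration):

```python
import pandas as pd

# illustrative half-hourly price series
prices = pd.Series([50.0, 55.0, 60.0, 45.0, 70.0], name="price [$/MWh]")

# column hh_N holds the price N half-hours ahead of the current row
forecasts = pd.DataFrame({
    f"C_forecast_electricity_price_hh_{n} [$/MWh]": prices.shift(-n)
    for n in range(4)
})
print(forecasts)
```

So the first row's hh_0 column equals the current price, hh_1 the next half-hour's price, and the last rows contain NaNs where no future price exists.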

Let me know if you have any more questions - Adam.

Thanks Adam.

At the moment I would like to use your implementation of the DPL agent, but with a custom environment.
I would like to model an HVAC system, and I need to develop this environment.
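A custom environment generally needs to expose the reset/step interface that RL agents expect. A minimal, hypothetical HVAC sketch in the gym style (the class, dynamics, and numbers are illustrative, not energy-py's actual base class API):

```python
import numpy as np

class HVACEnv:
    """Hypothetical minimal HVAC environment sketch (gym-style interface)."""

    def __init__(self, setpoint=21.0):
        self.setpoint = setpoint
        self.temperature = 20.0

    def reset(self):
        self.temperature = 20.0
        return np.array([self.temperature])

    def step(self, action):
        # action 0 = heat (+0.5 C), action 1 = cool (-0.5 C)
        self.temperature += 0.5 if action == 0 else -0.5
        # reward: negative distance from the temperature setpoint
        reward = -abs(self.temperature - self.setpoint)
        done = False
        return np.array([self.temperature]), reward, done, {}

env = HVACEnv()
obs = env.reset()
obs, reward, done, info = env.step(0)
print(obs, reward)
```

The real thermal dynamics would come from a simulation model; the agent only needs the observation, reward, done, info tuple each step.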

Luca

Hi Adam,

while running the demo with the battery example, I got the following issue:

(screenshot of the traceback)

I think there is a problem with line 90 in "env_info.py":
output.index = env.state_space.episode.index[:-1]

Could that be the cause?

Regards,

Hi Luca - I've fixed this, the commit is here.
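For context, errors at that line are typically a length mismatch between the output and the episode index. A small pandas sketch of the failure mode and one possible guard (the actual committed fix may differ):

```python
import pandas as pd

# stand-ins for the objects in env_info.py
df = pd.DataFrame({"reward": [1.0, 2.0, 3.0]})
episode_index = pd.RangeIndex(5)  # stand-in for env.state_space.episode.index

try:
    # slicing off one element still leaves 4 labels for 3 rows -> ValueError
    df.index = episode_index[:-1]
except ValueError as err:
    print("length mismatch:", err)

# one way to guard: slice the index to the length of the output
df.index = episode_index[: len(df)]
print(df)
```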

Regarding HVAC - I've done some work on electric chillers. I would suggest looking for some open source simulation models, which we can wrap energy-py around.

Hi @ADGEfficiency ,
when you talk about open source simulation models, do you mean something like the EnergyPlus software?

I would like to create a variable that is controlled by my action: action 1 adds a value, action 2 subtracts a value. How can I constrain this variable within a minimum and maximum range?
When I pass my reward, the algorithm tends to always choose a single action.
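One common way to constrain a variable like this is to clip it after applying the action, e.g. with `numpy.clip`. A minimal sketch, with illustrative names, bounds, and step size:

```python
import numpy as np

MIN_VALUE, MAX_VALUE = 0.0, 100.0
STEP = 5.0

def apply_action(value, action):
    """Action 1 increases the variable, action 2 decreases it;
    the result is clipped to stay inside [MIN_VALUE, MAX_VALUE]."""
    if action == 1:
        value += STEP
    elif action == 2:
        value -= STEP
    return float(np.clip(value, MIN_VALUE, MAX_VALUE))

print(apply_action(98.0, 1))  # clipped to 100.0
print(apply_action(2.0, 2))   # clipped to 0.0
```

If the agent collapses onto a single action, it is worth checking that the reward actually differentiates the two actions over an episode, for example by penalising time spent stuck at the bounds.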