

Hill Climbing methods

Description

In this repo I explore Hill Climbing improvements such as adaptive noise scaling and the cross-entropy method, and use them to solve the CartPole-v0 environment from OpenAI Gym.
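
As a rough illustration of the idea, below is a minimal sketch of hill climbing with adaptive noise scaling on CartPole-v0 (the cross-entropy variant would instead sample a population of candidates each iteration and refit the weights to the best-scoring ones). It assumes the classic gym API (reset() returns the observation, step() returns a 4-tuple) and a simple linear softmax policy; the names below are illustrative and are not necessarily those used in ce_w_ans_agent.py.

```python
import gym
import numpy as np


def evaluate(env, weights, max_steps=1000):
    """Run one episode with a greedy linear softmax policy and return the total reward."""
    state = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        probs = np.exp(state @ weights)
        probs /= probs.sum()
        action = int(np.argmax(probs))          # greedy action from the linear policy
        state, reward, done, _ = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward


env = gym.make("CartPole-v0")
np.random.seed(0)
best_w = 1e-4 * np.random.rand(env.observation_space.shape[0], env.action_space.n)
best_return, noise_scale = -np.inf, 1e-2

for episode in range(1000):
    # Perturb the best weights found so far with Gaussian noise.
    candidate = best_w + noise_scale * np.random.randn(*best_w.shape)
    ret = evaluate(env, candidate)
    if ret >= best_return:
        # Improvement: keep the candidate and shrink the noise (exploit).
        best_return, best_w = ret, candidate
        noise_scale = max(noise_scale / 2, 1e-3)
    else:
        # No improvement: keep the old weights and grow the noise (explore).
        noise_scale = min(noise_scale * 2, 2.0)
    if best_return >= 195.0:                    # CartPole-v0 is considered solved at 195
        print(f"Solved after {episode} episodes, return {best_return}")
        break
```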

Usage

The RL algorithm is located in the "ce_w_ans_agent.py" file. To see it working on the gym environment, run the Jupyter notebook OpenAI_Gym_CartPole-v0.ipynb, in which you can either train the agent from scratch or comment out the training phase and load the saved weights to test it.
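
For the "load the weights and test" path, a minimal sketch might look like the following; the weights file name (weights.npy) and the greedy linear policy are assumptions and may differ from what the notebook actually saves and loads.

```python
import gym
import numpy as np

env = gym.make("CartPole-v0")
weights = np.load("weights.npy")                # hypothetical file produced by training

state = env.reset()
total_reward, done = 0.0, False
while not done:
    probs = np.exp(state @ weights)
    probs /= probs.sum()
    state, reward, done, _ = env.step(int(np.argmax(probs)))
    total_reward += reward
print("Episode return:", total_reward)
```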

Installation

To use this code you need to install the following packages:

  • gym
  • numpy
  • jupyter
  • matplotlib
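
These can usually be installed with pip, for example:

```
pip install gym numpy jupyter matplotlib
```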

License

GNU General Public License v3.0

