markub3327 / deep_exploration_with_E_network

For the EECS 598 class final project.


1 Purpose

Reproduce the results in the ICLR submission DORA.

2 Running the code

To reproduce the results in our reproduction workshop paper, follow the setup below.

2.1 Function approximation

The code accepts several command-line arguments (e.g., to use DORA or DQN, to use epsilon-greedy or softmax action selection, to render the environment or not, and to choose the environment to run). You can read about the options by running

python main.py -h

An example run using the mountain car environment with the paper's settings is

python main.py -m dora -a softmax -g mountain_car
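
For orientation, the commands above imply an argument parser roughly like the following. This is a hedged sketch reconstructed from the flags shown in this README; the long option names, choices, and defaults are assumptions, and python main.py -h is authoritative.

# Sketch of the CLI implied by the example commands; the real definitions
# live in main.py. Long names, choices, and defaults here are assumptions.
import argparse

parser = argparse.ArgumentParser(description="DORA reproduction experiments")
parser.add_argument("-m", "--model", choices=["dora", "dqn"],
                    help="which agent to train")
parser.add_argument("-a", "--action-selection", choices=["softmax", "egreedy"],
                    help="action-selection rule")
parser.add_argument("-g", "--game", default="mountain_car",
                    help="which environment to run")
parser.add_argument("-r", "--render", action="store_true",
                    help="render the environment")
args = parser.parse_args()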

To repeat runs in parallel using the same settings, run

python run_parallel.py -m dora -a softmax -g mountain_car

By default, this repeats the same experiment 10 times.
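
run_parallel.py presumably launches several copies of main.py with the same flags; a minimal sketch of that behavior using subprocess follows (the actual script may differ):

# Hedged sketch of what run_parallel.py is assumed to do: launch N
# identical copies of main.py with the caller's flags and wait for all.
import subprocess
import sys

N_RUNS = 10  # the default number of repetitions mentioned above

procs = [
    subprocess.Popen([sys.executable, "main.py"] + sys.argv[1:])
    for _ in range(N_RUNS)
]
for p in procs:
    p.wait()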

2.2 Tabular setting

Please refer to env/readme.txt

3 Registering the bridge environment

Find the location of your gym installation (referred to as gym/ below):

import gym
import os
print(os.path.dirname(gym.__file__))

Then add the following lines to gym/envs/__init__.py:

register(
    id="BridgeEnv-v0",
    entry_point="gym.envs.bridge.bridge:BridgeEnv",
)
register(
    id="BridgeLargeEnv-v0",
    entry_point="gym.envs.bridge.bridge:BridgeLargeEnv",
)

Then create the directory gym/envs/bridge and copy env/bridge.py into it (depending on your setup, an empty __init__.py in that directory may also be needed so the entry point resolves).
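
env/bridge.py defines BridgeEnv and BridgeLargeEnv. For orientation only, a custom gym environment of that era generally follows this skeleton (an illustrative sketch with hypothetical spaces, not the contents of bridge.py):

# Skeleton of a classic-gym environment, for orientation only; the real
# BridgeEnv implementation lives in env/bridge.py.
import gym
from gym import spaces

class BridgeEnv(gym.Env):
    def __init__(self):
        self.action_space = spaces.Discrete(2)        # hypothetical: two moves
        self.observation_space = spaces.Discrete(10)  # hypothetical: positions
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # Transition, reward, and termination logic would go here.
        reward, done = 0.0, False
        return self.state, reward, done, {}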

To use the environments:

import gym
env_small = gym.make("BridgeEnv-v0")
env_large = gym.make("BridgeLargeEnv-v0")
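
Once registered, the environments can be exercised with the standard gym interaction loop, e.g. a quick random-policy smoke test:

# Random-policy smoke test using the classic gym API.
import gym

env = gym.make("BridgeEnv-v0")
obs = env.reset()
for _ in range(100):  # cap steps in case the random policy never terminates
    obs, reward, done, info = env.step(env.action_space.sample())
    if done:
        break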

4 Conclusion of our replication

We verified that DORA works in the tabular setting. However, DORA's experiments using function approximation put DQN in a disadvantageous position (not a fair comparison). We were able to adjust the settings and obtain much better results with DQN.

To replicate our setting, switch to the openai branch (named after the OpenAI settings it follows) and execute

python main.py -m dqn -g mountain_car -l logs

The rewards of this run are then cached in logs/dqn_default.pkl. You can verify that the run worked by executing the code in plot.ipynb.
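
If you prefer not to open the notebook, the cached rewards can also be inspected directly. A sketch, assuming the pickle holds a sequence of per-episode rewards (plot.ipynb is the canonical version):

# Load and plot the cached rewards; the exact structure of the pickle is
# an assumption here and is defined by the repository's logging code.
import pickle
import matplotlib.pyplot as plt

with open("logs/dqn_default.pkl", "rb") as f:
    rewards = pickle.load(f)

plt.plot(rewards)
plt.xlabel("episode")
plt.ylabel("reward")
plt.show()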

5 Authors' response

The authors' response and the whole OpenReview process can be found here: https://openreview.net/forum?id=ry1arUgCW

In short, the first experiment needed crucial fixes: the problem had essentially been changed so that no agent converged at all (not that E-values were worse; nothing improved). Fixing the problem allowed replication. On the changed problem, E-values were still better on average, even though one could find an example in which a single DQN run beat a single E-value-based run. Additionally, Freeway experiments replicating the results on another problem were added to the paper during the review process; these are not covered in this replication.


