Abhipanda4 / Battle-Of-DQNs

A comparison of the performance of DQN and several of its variants, using PyTorch and Pong.


Battle of Deep Q-Nets

This repo contains implementations of Deep Q-Network (DQN) and two of its variants: Double DQN and Duelling DQN. A fourth variant, which uses a duelling architecture and computes the loss in Double DQN style, is also included. PongDeterministic-v4 was used for evaluating performance since agents converge on it quickly.
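
For reference, below is a minimal PyTorch sketch of the two ideas that the fourth variant combines: a duelling head that splits Q-values into a state value and per-action advantages, and a Double DQN loss in which the online network selects the next action while the target network evaluates it. The class and function names, the fully connected layers (the actual code uses convolutional layers over Pong frames), and the hyperparameters here are illustrative assumptions, not code from this repository.

```python
# Illustrative sketch only: names, layer sizes, and gamma are assumptions,
# not taken from this repo (which uses a conv net over Atari frames).
import torch
import torch.nn as nn


class DuellingDQN(nn.Module):
    """Duelling architecture: shared features split into V(s) and A(s, a)."""

    def __init__(self, num_inputs: int, num_actions: int):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(num_inputs, 128), nn.ReLU())
        self.value = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 1))
        self.advantage = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, num_actions))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)
        v = self.value(h)        # (batch, 1)
        a = self.advantage(h)    # (batch, num_actions)
        # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a): keeps the decomposition identifiable.
        return v + a - a.mean(dim=1, keepdim=True)


def double_dqn_loss(online_net, target_net, batch, gamma=0.99):
    """Double DQN target: the online net picks argmax_a, the target net evaluates it."""
    state, action, reward, next_state, done = batch

    # Q-value of the action actually taken.
    q_values = online_net(state).gather(1, action.unsqueeze(1)).squeeze(1)

    with torch.no_grad():
        next_action = online_net(next_state).argmax(dim=1, keepdim=True)
        next_q = target_net(next_state).gather(1, next_action).squeeze(1)
        target = reward + gamma * next_q * (1 - done)

    return nn.functional.mse_loss(q_values, target)
```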

References:

Code Reference:

The code is heavily inspired by https://github.com/higgsfield/RL-Adventure. The network architecture and hyperparameters are borrowed directly from that repo.

Rewards vs Episodes:

[reward_curve plot]

Loss vs Episodes:

[loss_curve plot]
