
DQN and Double-DQN

AI Flappy Bird Game Solved using Deep Q-Learning and Double Deep Q-Learning

  • The Flappy Bird game is used as the reference for building the environment.
  • Unnecessary graphics such as the wing-flap animation are removed to make rendering and training faster.
  • The background is replaced with solid black so the network does not spend capacity on irrelevant pixels, helping the model converge faster (see the preprocessing sketch below).
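
With the frames simplified like this, a typical pixel-input pipeline converts each frame to grayscale, downsamples it, and stacks recent frames so the network can infer the bird's velocity. The sketch below is illustrative only; the function names (`preprocess_frame`, `stack_frames`), frame size, and stack depth are assumptions and may differ from the actual code in this repository.

```python
import numpy as np
import cv2  # opencv-python

def preprocess_frame(frame, size=(84, 84)):
    """Convert an RGB game frame to a small grayscale image scaled to [0, 1]."""
    gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
    small = cv2.resize(gray, size, interpolation=cv2.INTER_AREA)
    return small.astype(np.float32) / 255.0

def stack_frames(frames):
    """Stack the last 4 preprocessed frames into a (4, 84, 84) state,
    so consecutive frames let the network infer motion."""
    return np.stack(frames[-4:], axis=0)
```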

Deep Q-Learning

The core difference between Deep Q-Learning and vanilla Q-Learning is how the Q-function is represented: Deep Q-Learning replaces the Q-table with a neural network. Rather than looking up a Q-value for each state-action pair in a table, the network takes the state as input and outputs a Q-value for every available action.
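
As a concrete illustration, a minimal PyTorch sketch of such a network is shown below. The layer sizes, the 4-frame input, and the two-action output (flap / do nothing) are assumptions; the architecture used in this repository may differ.

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Maps a stack of 4 grayscale 84x84 frames to one Q-value per action."""
    def __init__(self, n_actions=2):  # Flappy Bird: flap or do nothing
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),  # one Q-value per action
        )

    def forward(self, state):
        return self.net(state)  # shape: (batch, n_actions)
```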

Deep Q-Learning Pseudocode
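
The standard DQN training step, with experience replay and a periodically synced target network, can be summarized by the hedged Python sketch below. Names such as `select_action` and `dqn_update`, and hyperparameters like `gamma = 0.99`, are illustrative and not taken from this repository.

```python
import random
import torch
import torch.nn.functional as F

gamma = 0.99    # discount factor (illustrative value)
epsilon = 0.1   # exploration rate (illustrative value)

def select_action(q_net, state, n_actions=2):
    """Epsilon-greedy action selection."""
    if random.random() < epsilon:
        return random.randrange(n_actions)
    with torch.no_grad():
        return q_net(state.unsqueeze(0)).argmax(dim=1).item()

def dqn_update(q_net, target_net, optimizer, batch):
    """One gradient step on a minibatch sampled from the replay buffer."""
    states, actions, rewards, next_states, dones = batch
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Vanilla DQN target: max over next-state Q-values from the target network.
        next_q = target_net(next_states).max(dim=1).values
        targets = rewards + gamma * next_q * (1 - dones)
    loss = F.smooth_l1_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```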

Double Deep Q-Learning

The implementation of Double Q-Learning with a deep neural network is called the Double Deep Q-Network (Double DQN). Inspired by Double Q-Learning, Double DQN uses two separate networks, the online Deep Q-Network (DQN) and a target network: the online network selects the best next action while the target network evaluates it, which reduces the overestimation of Q-values.
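
The only change from the DQN update sketched above is how the target is computed. A minimal sketch, using the same illustrative names as before:

```python
import torch

def double_dqn_targets(q_net, target_net, rewards, next_states, dones, gamma=0.99):
    """Double DQN target: the online network selects the next action,
    the target network evaluates it."""
    with torch.no_grad():
        best_actions = q_net(next_states).argmax(dim=1, keepdim=True)        # online net selects
        next_q = target_net(next_states).gather(1, best_actions).squeeze(1)  # target net evaluates
        return rewards + gamma * next_q * (1 - dones)
```

The target network's weights are typically synced from the online network every fixed number of steps, e.g. `target_net.load_state_dict(q_net.state_dict())`.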

Reward statistics while training the Deep Q-Network

Flappy Bird gameplay (GIF: flappy_bird_gif)
