MAB.jl - A Package for Bandit Experiments

A Julia package for Multi-Armed Bandit experiments.

Repository: https://github.com/v-i-s-h/MAB.jl

This package provides a framework for developing and comparing various bandit algorithms.
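
As a rough, standalone illustration of what such a framework does (plain Julia, not the package's actual API; the names UniformPolicy, choose_arm, update!, and run_experiment are hypothetical), the sketch below plays a policy against a set of Bernoulli arms and records the rewards it collects:

    # Standalone sketch, not the MAB.jl API: a bandit experiment repeatedly
    # asks a policy for an arm, draws a reward from that arm, and feeds the
    # reward back to the policy.

    # A uniformly random policy over k arms.
    struct UniformPolicy
        k::Int
    end

    choose_arm(p::UniformPolicy) = rand(1:p.k)
    update!(p::UniformPolicy, arm, reward) = nothing    # random play learns nothing

    # Run one experiment: arm_probs[i] is the success probability of arm i,
    # T is the horizon (number of rounds).
    function run_experiment(policy, arm_probs::Vector{Float64}, T::Int)
        rewards = zeros(T)
        for t in 1:T
            a = choose_arm(policy)
            rewards[t] = rand() < arm_probs[a] ? 1.0 : 0.0   # Bernoulli reward
            update!(policy, a, rewards[t])
        end
        return rewards
    end

    # Example: 3 Bernoulli arms, 1000 rounds of uniform play.
    rewards = run_experiment(UniformPolicy(3), [0.2, 0.5, 0.8], 1000)
    println("average reward: ", sum(rewards) / length(rewards))

A learning algorithm would replace UniformPolicy with something that uses the feedback passed to update!, which is where the algorithms listed below differ.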

Available Algorithms

  1. Uniform Strategy (picks an arm uniformly at random)
  2. ϵ-greedy
    1. ϵ-greedy
    2. ϵ_n greedy
  3. Upper Confidence Bound Policies
    1. UCB1 (see the sketch after this list)
    2. UCB-Normal
    3. UCB-V
    4. Bayes-UCB (for Bernoulli rewards)
    5. KL-UCB
    6. Discounted-UCB
    7. Sliding Window UCB
  4. Thompson Sampling
    1. Thompson Sampling (see the sketch after this list)
    2. Dynamic Thompson Sampling
    3. Optimistic Thompson Sampling
    4. TSNormal (Thompson Sampling for Gaussian-distributed rewards)
    5. Restarting Thompson Sampling
    6. TS With Gaussian Prior
  5. EXP3
    1. EXP3
    2. EXP3.1
    3. EXP3-IX
  6. SoftMax
  7. REXP3
  8. Gradient Bandit
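
As a concrete example of one index policy from the list above, here is a minimal standalone sketch of the UCB1 rule (plain Julia, not the package's API; ucb1_indices is a hypothetical helper): each arm's index is its empirical mean reward plus an exploration bonus of sqrt(2 ln t / n_i), and the arm with the largest index is played.

    # Minimal UCB1 sketch (standalone; not the MAB.jl API).
    #   means[i]  - empirical mean reward of arm i
    #   counts[i] - number of times arm i has been played
    #   t         - current round
    function ucb1_indices(means::Vector{Float64}, counts::Vector{Int}, t::Int)
        indices = similar(means)
        for i in eachindex(means)
            if counts[i] == 0
                indices[i] = Inf                                      # play untried arms first
            else
                indices[i] = means[i] + sqrt(2 * log(t) / counts[i])  # mean + exploration bonus
            end
        end
        return indices
    end

    # Example round: play the arm with the largest UCB1 index.
    means, counts, t = [0.40, 0.60, 0.55], [10, 12, 8], 30
    best_arm = argmax(ucb1_indices(means, counts, t))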
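
Similarly, the core of Thompson Sampling for Bernoulli rewards can be sketched in a few lines (again standalone and hypothetical, not the package's API; it assumes the Distributions.jl package for the Beta distribution): each arm keeps a Beta posterior over its success probability, one value is sampled from each posterior per round, the arm with the largest sample is played, and its posterior is updated with the observed 0/1 reward.

    # Thompson Sampling sketch for Bernoulli rewards (standalone; not the MAB.jl API).
    using Distributions    # assumed dependency, for the Beta distribution

    # One round: sample from each arm's Beta(α, β) posterior, play the argmax,
    # observe a 0/1 reward via `pull`, and update that arm's posterior.
    function thompson_step!(α::Vector{Float64}, β::Vector{Float64}, pull)
        samples = [rand(Beta(α[i], β[i])) for i in eachindex(α)]
        a = argmax(samples)
        r = pull(a)
        α[a] += r          # posterior update: add successes ...
        β[a] += 1 - r      # ... and failures
        return a, r
    end

    # Example: 3 arms with uniform Beta(1, 1) priors and fixed success probabilities.
    α, β, probs = ones(3), ones(3), [0.2, 0.5, 0.8]
    for _ in 1:1000
        thompson_step!(α, β, a -> rand() < probs[a] ? 1.0 : 0.0)
    end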

Available Arm Models

  1. Bernoulli (see the sketch after this list)
  2. Beta
  3. Normal
  4. Sinusoidal (without noise)
  5. Pulse (without noise)
  6. Square
  7. Variational (without noise)
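
The arm models above define how each arm generates its reward in a round: some are stochastic (e.g. Bernoulli, Beta, Normal), while the ones marked "without noise" follow a deterministic waveform over time. As a rough standalone illustration (hypothetical type and function names, not the package's API), the sketch below shows a Bernoulli arm and a noiseless Sinusoidal arm:

    # Standalone sketch of two arm models (hypothetical names; not the MAB.jl API).

    # Stochastic arm: reward is 1 with probability p, else 0.
    struct BernoulliArmSketch
        p::Float64
    end
    pull(arm::BernoulliArmSketch, t::Int) = rand() < arm.p ? 1.0 : 0.0

    # Noiseless, time-varying arm: reward follows a sinusoid over the rounds,
    # rescaled to lie in [0, 1].
    struct SinusoidalArmSketch
        period::Int
    end
    pull(arm::SinusoidalArmSketch, t::Int) = (1 + sin(2π * t / arm.period)) / 2

    # Example: rewards observed from each arm at round t = 10.
    r1 = pull(BernoulliArmSketch(0.7), 10)
    r2 = pull(SinusoidalArmSketch(100), 10)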
