There are 14 repositories under the bandit-algorithm topic.
Thompson Sampling Tutorial
A contextual bandit algorithm, LinUCB (Linear Upper Confidence Bound), as proposed by Li, Chu, Langford, and Schapire
Another A/B test library
Privacy-Preserving Bandits (MLSys'20)
Solutions and figures for problems from Reinforcement Learning: An Introduction by Sutton & Barto
Client that handles the administration of StreamingBandit online or straight from your desktop. Set up and run streaming (contextual) bandit experiments in your browser.
Movie recommendation using cascading bandits, namely CascadeLinTS and CascadeLinUCB
Adversarial multi-armed bandit algorithms
Reinforcement learning
Solutions to the Stanford CS234 Reinforcement Learning 2022 course assignments.
Research project on automated A/B testing of software by evolutionary bandits.
This presentation gives a precise yet detailed explanation of the concepts of reinforcement learning.
A small collection of Bandit Algorithms (ETC, E-Greedy, Elimination, UCB, Exp3, LinearUCB, and Thompson Sampling)
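Several of the repositories above implement classic bandit strategies such as E-Greedy. As a rough illustration of the core explore/exploit idea (not the code from any listed repository), here is a minimal epsilon-greedy sketch on a Bernoulli bandit; the arm means, step count, and epsilon are made-up example values:

```python
import random

def epsilon_greedy(true_means, epsilon=0.1, steps=5000, seed=0):
    """Run epsilon-greedy on a Bernoulli bandit; return per-arm mean estimates and pull counts."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms          # number of pulls per arm
    estimates = [0.0] * n_arms     # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                           # explore: random arm
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])  # exploit: best estimate so far
        reward = 1.0 if rng.random() < true_means[arm] else 0.0   # Bernoulli reward draw
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm] # incremental mean update
    return estimates, counts

# Hypothetical three-armed bandit; arm 2 (mean 0.8) should attract most pulls.
estimates, counts = epsilon_greedy([0.2, 0.5, 0.8])
```

With enough steps the estimate for the best arm converges toward its true mean while the exploration rate epsilon keeps every arm sampled occasionally; UCB-style methods (LinUCB, CascadeLinUCB above) replace the fixed epsilon with a confidence bonus instead.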