google-deepmind / open_spiel

OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games.

Implement ReBeL with Public State API

nimitpattanasri opened this issue · comments

Hi,

I would like to study and implement ReBeL in OpenSpiel.

I learned from here that public states are fundamental not only for this algorithm but also for DeepStack, Pluribus, and MCCR (Monte Carlo Continual Resolving). I also came across the Public State API by @michalsustr, but I can no longer find it in the main repo.

Is the Public State API still a recommended way to implement ReBeL? Has it been supplanted by FOGs (factored-observation games) or sequential Bayesian games? Where should I get started?

Hi, yes, this was removed because it was not maintained.

I would say it would be easier to start from a framework that supports public states natively, rather than building them on top of OpenSpiel.

The ReBeL paper linked to some open code. @michalsustr, would that be a good place to start? Do you know of any general code bases that support FOGs or public states natively?

Tagging @ssokota too. He might have some pointers.

I'm not aware of any open-source frameworks that natively support public-private factorization, unfortunately.

@lanctot Your suggestion saves me a lot of effort. Thank you.

Hi @ssokota, I see your implementation of the Trade Comm public belief state here. Do you think it would be useful to look into that codebase and adapt it for ReBeL?

If your goal is to make a ReBeL implementation that is general across OpenSpiel, I think it makes sense to implement an API for public observations. If you just want a Texas Hold'em implementation, you could use an existing game implementation and have the ReBeL code handle public-information extraction internally.
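To illustrate what "extracting public information internally" involves: the core object in ReBeL is the public belief state (PBS), a distribution over each player's private state conditioned on the public action history, updated by Bayes' rule as public actions are observed. Below is a minimal, self-contained sketch of such an update for a Kuhn-poker-like setting; the function name and the policy probabilities are made up for illustration and are not part of OpenSpiel or the ReBeL codebase.

```python
# Hypothetical sketch: one public-belief-state (PBS) update via Bayes' rule.
# We track a belief over the opponent's private card and condition on a
# publicly observed action, given an (illustrative) strategy for that player.

def update_belief(belief, action, policy):
    """Posterior over private cards after observing a public action.

    belief: dict card -> P(card | public history so far)
    action: the publicly observed action
    policy: dict card -> dict action -> P(action | card)
    """
    unnormalized = {c: belief[c] * policy[c].get(action, 0.0) for c in belief}
    total = sum(unnormalized.values())
    if total == 0.0:
        raise ValueError("action has zero probability under this belief")
    return {c: p / total for c, p in unnormalized.items()}

# Uniform prior over the three Kuhn cards.
prior = {"J": 1 / 3, "Q": 1 / 3, "K": 1 / 3}

# Illustrative (made-up) strategy: stronger cards bet more often.
policy = {
    "J": {"bet": 0.1, "check": 0.9},
    "Q": {"bet": 0.5, "check": 0.5},
    "K": {"bet": 0.9, "check": 0.1},
}

# After observing a public "bet", the belief shifts toward strong cards.
posterior = update_belief(prior, "bet", policy)
```

A general Public State API would essentially expose the inputs to a function like this for any game: which parts of the observation are public, and how private states map to probabilities of public actions. A game-specific ReBeL implementation can instead hard-code this extraction for one game, which is what the suggestion above amounts to.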

@ssokota Makes sense. Thank you for your suggestion.