uber-research / go-explore

Code for Go-Explore: a New Approach for Hard-Exploration Problems

Home Page: https://arxiv.org/abs/1901.10995

Documentation file

haneenhassen opened this issue · comments

Dear Dr. Ecoffet,
As an MSc student, I am currently working on incorporating the Go-Explore exploration method into the MDPO algorithm, which is described in the paper "Mirror Descent Policy Optimization" (https://arxiv.org/pdf/2005.09814.pdf). I have been trying to locate the documentation for the exploration method, but unfortunately I have been unable to find it.
I would greatly appreciate any instructions or guidance on how to implement the exploration method within the MDPO algorithm.
Thank you in advance for your assistance. I am eager to learn and apply this method to further enhance the MDPO algorithm.
Gratefully,
Haneen
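For others landing on this issue with the same question: while this is no substitute for official documentation, the core of Go-Explore's Phase 1 (exploration) can be sketched from the paper's description (arXiv:1901.10995). The sketch below is a minimal, hypothetical illustration, not the repository's actual implementation; `ToyEnv`, `cell_of`, and the weighting scheme are stand-ins for the paper's deterministic simulator, cell representation, and cell-selection heuristic.

```python
# Minimal sketch of Go-Explore Phase 1: maintain an archive of cells,
# repeatedly select a cell, restore the simulator to it, and explore
# from there with random actions. All names here are hypothetical.
import random

random.seed(0)

class ToyEnv:
    """Deterministic 1-D walk; the state is just an integer position."""
    def __init__(self):
        self.pos = 0

    def restore(self, state):
        # Go-Explore's "return" step: restore the simulator state directly.
        self.pos = state

    def step(self, action):  # action in {-1, +1}
        self.pos += action
        return self.pos

def cell_of(state):
    """Downscale a state to a coarse cell (here: buckets of width 3)."""
    return state // 3

def go_explore_phase1(iterations=200, explore_steps=10):
    env = ToyEnv()
    # archive maps cell -> (a state reaching that cell, times selected)
    archive = {cell_of(0): (0, 1)}
    for _ in range(iterations):
        # Select a cell, favouring rarely selected ones (a simple stand-in
        # for the paper's cell-selection heuristic).
        cells = list(archive)
        weights = [1.0 / archive[c][1] for c in cells]
        cell = random.choices(cells, weights=weights)[0]
        state, count = archive[cell]
        archive[cell] = (state, count + 1)
        env.restore(state)                  # return to the cell
        for _ in range(explore_steps):      # explore from it
            s = env.step(random.choice([-1, 1]))
            c = cell_of(s)
            if c not in archive:            # newly discovered cell
                archive[c] = (s, 1)
    return archive

archive = go_explore_phase1()
print(len(archive), "cells discovered")
```

Integrating this into MDPO would presumably mean using the archive of discovered states/trajectories to seed or guide the policy-optimization phase, analogous to the paper's "robustify" step, but that design choice is beyond this sketch.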