openai / baselines

OpenAI Baselines: high-quality implementations of reinforcement learning algorithms

In baselines/common/distributions.py, CategoricalPd.sample() seems to have a bug.

morenfang opened this issue · comments

I found that when calling CategoricalPd.sample(), the sampling results are heavily biased. On inspection, it appears that self.logits should be tf.log(self.logits).
See this page: https://en.wikipedia.org/wiki/Categorical_distribution

Current implementation:

def sample(self):
    u = tf.random_uniform(tf.shape(self.logits), dtype=self.logits.dtype)
    return tf.argmax(self.logits - tf.log(-tf.log(u)), axis=-1)

Proposed fix:

def sample(self):
    u = tf.random_uniform(tf.shape(self.logits), dtype=self.logits.dtype)
    return tf.argmax(tf.log(self.logits) - tf.log(-tf.log(u)), axis=-1)

I also ran experiments to verify this. After adding tf.log, the samples conform to the given distribution.
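A verification experiment like the one described can be sketched outside TensorFlow with NumPy. This is only an illustration of the Gumbel-max trick on a known probability vector, not the baselines code; the names p, gumbel_max_sample, and the chosen distribution are all made up for the example:

```python
import numpy as np

def gumbel_max_sample(log_p, rng, n_samples):
    """Draw n_samples category indices via the Gumbel-max trick:
    argmax(log_p - log(-log(u))) with u ~ Uniform(0, 1)."""
    u = rng.uniform(size=(n_samples, log_p.shape[0]))
    return np.argmax(log_p - np.log(-np.log(u)), axis=-1)

rng = np.random.default_rng(0)
p = np.array([0.2, 0.3, 0.5])          # target categorical distribution
samples = gumbel_max_sample(np.log(p), rng, 200_000)
freq = np.bincount(samples, minlength=p.size) / samples.size
print(freq)  # empirical frequencies should be close to p
```

With log-probabilities as input, the empirical frequencies match p; feeding raw probabilities into the same argmax (without the log) is what produces the biased samples reported above.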