MushroomRL / mushroom-rl

Python library for Reinforcement Learning.

Question:How is reward defined for Atari Pong?

goplyer opened this issue · comments

Using the code in mushroom-rl/docs/source/tutorials/code/dqn.py I trained a network to play Atari Pong. After training, I ran 1000 evaluation episodes of one game each to check the win rate of the network against the built-in Atari opponent. For about a quarter of the episodes, the reward reported by core.evaluate was zero. I wonder what this means and whether it is an intended result.

Pong is a two-player game. Each player scores points and a game ends when one player reaches a score of 21; the game cannot end in a tie. A natural definition of the reward for a game would be score(network) - score(Atari player), which cannot be zero. If that is not the definition used, what is the intended reward?

The reward in Pong, as in all the Atari games, is the one returned by the Gym environment. In the case of Pong, it is +1 when the agent scores a point and -1 when the opponent scores.
Normally, the cumulative discounted reward J, printed after each epoch, starts around -21 and slowly improves towards 21. I'm not really sure where you see a reward of 0. Are you looking at the dataset returned by evaluate?
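
For reference, the per-game score can be recovered by summing these +1/-1 step rewards over each episode of the dataset. Below is a minimal sketch, assuming the compute_J helper from mushroom_rl.utils.dataset and a core object built as in the dqn.py tutorial; names and parameters are illustrative, not the exact tutorial setup:

    from mushroom_rl.utils.dataset import compute_J

    # Each sample of the dataset is one environment step, so most rewards are 0;
    # only the steps where a point is scored carry +1 or -1.
    dataset = core.evaluate(n_episodes=10, render=False)

    # With gamma=1, each entry of J is the undiscounted sum of the step rewards
    # of one game, i.e. score(agent) - score(opponent), in the range [-21, 21].
    J = compute_J(dataset, gamma=1.)
    print(J)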

Yes, I am looking at the dataset returned by evaluate and averaged by get_stats. I expect things to work as you described. Here is a sample of the output. Thank you for your comment.

    min_reward: 4.000000, max_reward: 4.000000, mean_reward: 4.000000, games_completed: 1
    min_reward: 5.000000, max_reward: 5.000000, mean_reward: 5.000000, games_completed: 1
    min_reward: 0.000000, max_reward: 0.000000, mean_reward: 0.000000, games_completed: 1
    min_reward: -1.000000, max_reward: -1.000000, mean_reward: -1.000000, games_completed: 1
    min_reward: 3.000000, max_reward: 3.000000, mean_reward: 3.000000, games_completed: 1
    min_reward: 5.000000, max_reward: 5.000000, mean_reward: 5.000000, games_completed: 1

The dataset returned by evaluate contains every step, so it is natural to see many transitions with a reward of 0.
From the results you posted, I do see some weird behavior: the number of completed games is always 1, which also explains why the minimum, mean, and maximum rewards are identical. I suggest you check the way you are running the evaluation, e.g. make sure the number of steps is sufficiently high.
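
If the goal is a win rate over many games, one way to sanity-check the evaluation is to split the dataset on the episode-end flag and sum the rewards of each game explicitly. A rough sketch, assuming the usual sample layout (state, action, reward, next_state, absorbing, last) with last as the sixth element; per_game_scores is only an illustrative helper, not part of the library:

    def per_game_scores(dataset):
        # Sum the +1/-1 step rewards of each game; the 'last' flag marks the
        # final sample of an episode, so its reward must be included in the sum.
        scores, current = [], 0.0
        for _, _, reward, _, absorbing, last in dataset:
            current += reward
            if last:
                scores.append(current)
                current = 0.0
        return scores

    scores = per_game_scores(core.evaluate(n_episodes=1000, render=False))
    win_rate = sum(s > 0 for s in scores) / len(scores)
    print('win rate over %d games: %.3f' % (len(scores), win_rate))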

My intention is to determine the win rate, so I have to examine the cumulative reward one game at a time and run many games; there is nothing weird there. I inserted a line of code into core.py to record what happens step by step:

    next_state, reward, absorbing, _ = self.mdp.step(action)

    # Testing point by point: print every non-zero step reward together with
    # the absorbing flag, so each scored point of a game becomes visible.
    if reward != 0.0:
        print(reward, absorbing, flush=True)

    self._episode_steps += 1

The results from a couple of sample games are below:
    pygame 1.9.6
    Hello from the pygame community. https://www.pygame.org/contribute.html
    -1.0 False -1.0 False 1.0 False -1.0 False 1.0 False -1.0 False 1.0 False 1.0 False -1.0 False -1.0 False
    1.0 False -1.0 False -1.0 False 1.0 False 1.0 False -1.0 False 1.0 False 1.0 False 1.0 False -1.0 False
    1.0 False -1.0 False 1.0 False -1.0 False -1.0 False 1.0 False -1.0 False 1.0 False -1.0 False 1.0 False
    -1.0 False 1.0 False 1.0 False 1.0 False 1.0 False 1.0 False -1.0 False 1.0 True
    min_reward: 3.000000, max_reward: 3.000000, mean_reward: 3.000000, games_completed: 1

    -1.0 False -1.0 False 1.0 False -1.0 False 1.0 False -1.0 False 1.0 False -1.0 False 1.0 False -1.0 False
    1.0 False -1.0 False -1.0 False -1.0 False -1.0 False 1.0 False -1.0 False -1.0 False -1.0 False 1.0 False
    -1.0 False 1.0 False 1.0 False 1.0 False -1.0 False 1.0 False 1.0 False 1.0 False -1.0 False 1.0 False
    1.0 False 1.0 False -1.0 False 1.0 False 1.0 False -1.0 False 1.0 False -1.0 False 1.0 False -1.0 False
    -1.0 True
    min_reward: 0.000000, max_reward: 0.000000, mean_reward: 0.000000, games_completed: 1

If you add up the +1s and -1s, you can verify that the sum disagrees with mean_reward by exactly +1 or -1. I suspect that when the dataset is processed, the last step of the game, where absorbing is True (an edge case), is not counted, but I'm not expert enough in the codebase to track it down.
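
As a toy illustration of the kind of off-by-one that would produce this discrepancy (a hypothetical reconstruction, not the actual compute_metrics code): if the terminal step is located via the episode-end flag and that index is then used as an exclusive slice bound, the reward of the terminal step is dropped.

    # Toy reconstruction of the suspected off-by-one, not the library code.
    rewards = [-1.0, 1.0, 1.0, 1.0]       # last entry is the terminal (absorbing) step
    last = [False, False, False, True]

    end = last.index(True)                # index of the terminal step

    buggy_score = sum(rewards[:end])      # terminal reward excluded -> 1.0
    fixed_score = sum(rewards[:end + 1])  # terminal reward included -> 2.0 (true game score)

    print(buggy_score, fixed_score)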

Thanks for your feedback. It was indeed a bug affecting the compute_metrics function used in the Atari experiment: in some cases, as you say, the reward of the last step is not counted. We have fixed the bug in the dev branch. We are currently working on an important new release with several new functionalities, e.g. online plotting of results and saving and loading of agents. We will soon merge the dev branch into master with all these new functionalities, including the bug fix.
Thanks again. I'll close this issue.
Best regards.