openai / maddpg

Code for the MADDPG algorithm from the paper "Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments"

Home Page: https://arxiv.org/pdf/1706.02275.pdf

Question regarding the replay buffers and the Critic networks. (duplicates in the state)

opt12 opened this issue

Hello everybody!

As far as I can see from the code, each agent maintains its own replay buffer.

In the training step, when sampling the minibatch, the observations of all agents are collected and concatenated:

obs_n, obs_next_n, act_n = [], [], []
for i in range(self.n):
    # Sample the same indices from every agent's replay buffer so the
    # transitions of all agents stay aligned for the centralized critic.
    obs, act, rew, obs_next, done = agents[i].replay_buffer.sample_index(index)
    obs_n.append(obs)
    obs_next_n.append(obs_next)
    act_n.append(act)

As far as I can see, this leads to duplicates in the state input of each agent's critic: if some components of the environment state are part of every agent's observation, those components end up in the critic's input multiple times.
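To make the concern concrete, here is a toy example with made-up numbers (two agents that both observe the same landmark position; the dimensions and the environment are purely hypothetical, not taken from the repo):

import numpy as np

# Hypothetical per-agent observation layout: [own_velocity, own_position, landmark_position]
landmark = np.array([0.5, -0.3])                  # shared component seen by every agent
obs_agent0 = np.concatenate([[0.1, 0.0], [-1.0, 0.2], landmark])
obs_agent1 = np.concatenate([[0.0, 0.2], [0.7, -0.5], landmark])

# The centralized critic receives the concatenation of all agents' observations
# (plus actions), so the two landmark coordinates appear twice in one input vector.
critic_state_input = np.concatenate([obs_agent0, obs_agent1])
print(critic_state_input.shape)                   # (12,)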

Is this true, or am I missing something?

Does this (artificial) state expansion have any adverse effect on the critic, or can we safely assume that the critic will quickly learn that the values at some of its input nodes are always identical and can therefore be treated as one?
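At least for the critic's first linear layer, my intuition is that a duplicated input only amounts to redundant parameters, since the two weights attached to identical inputs collapse into their sum. A tiny check with made-up numbers:

import numpy as np

x = 0.7                            # a shared state component that appears twice in the input
w_dup = np.array([0.3, -0.1])      # one weight per duplicated copy of x
w_merged = w_dup.sum()             # equivalent single weight on a single copy of x

# w1*x + w2*x == (w1 + w2)*x, so the duplication adds no representational power.
assert np.isclose(w_dup @ np.array([x, x]), w_merged * x)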

Are there any memory issues due to the shared state components being stored multiple times, once in each agent's replay buffer? (Memory is probably not a concern for RL people, but I come from an embedded-systems background.)
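For what it is worth, here is my rough back-of-the-envelope estimate; the buffer capacity and the observation/action sizes below are just assumed numbers, not values I checked in the repo:

# Rough memory estimate for n agents that each keep a full replay buffer (all sizes assumed).
buffer_size = int(1e6)        # assumed buffer capacity
n_agents = 3
obs_dim = 18                  # assumed per-agent observation dimension
act_dim = 5                   # assumed per-agent action dimension
bytes_per_value = 8           # float64

entry_bytes = (2 * obs_dim + act_dim + 2) * bytes_per_value   # obs, obs_next, act, rew, done
total_gb = n_agents * buffer_size * entry_bytes / 1e9
print(f"~{total_gb:.1f} GB across all agents' buffers")       # ~1.0 GB with these numbers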

I would be very grateful for some more insight on this topic.

Regards,
Felix