facebookresearch / Pearl

A Production-ready Reinforcement Learning AI Agent Library brought by the Applied Reinforcement Learning team at Meta.


MultiDiscrete action space not supported

zuoningz opened this issue · comments

Hi,

I am trying to run Pearl with the DQN algorithm on a custom environment, and I get the error below. Is MultiDiscrete not supported at the moment, and is there a workaround? I will paste the code that runs the training loop and interacts with the environment below. Please let me know if I am doing anything wrong.

NotImplementedError: The Gym space 'MultiDiscrete' is not yet supported in Pearl.
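For context, a Gym `MultiDiscrete` space is a vector of independent discrete choices, e.g. sizes `[4, 7, 5]` means three sub-actions with 4, 7, and 5 options. One library-independent workaround (a sketch, not part of Pearl's API) is to flatten the space into a single `Discrete` space by enumerating the Cartesian product of sub-actions, then wrapping the environment to translate flat indices back to tuples:

```python
from itertools import product

# Hypothetical example: a MultiDiscrete space with sizes [4, 7, 5]
# flattened into one Discrete space of 4 * 7 * 5 = 140 actions.
sizes = [4, 7, 5]
flat_actions = list(product(*[range(s) for s in sizes]))

print(len(flat_actions))   # 140 flat actions
print(flat_actions[0])     # (0, 0, 0) -- flat index 0 maps back to a tuple
print(flat_actions[-1])    # (3, 6, 4)
```

The downside is that the flat action count grows multiplicatively with each dimension, so this only works for small sub-action spaces.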

def train(self, alpha):
    for episode in tqdm(range(self.max_episodes)):
        observation, action_space = self.env.reset()
        self.agent.reset(observation, action_space)
        terminated = False  # must be initialized before the loop

        while not terminated:
            action = self.agent.act(exploit=False)
            action_alpha_list = [*action, alpha]
            action_result = self.env.step(action_alpha_list)
            self.agent.observe(action_result)
            self.agent.learn()
            terminated = action_result.done

I can also post other parts of my code (how I initialized the agent and model) if needed. Thanks!

By a multi-discrete action space, do you mean a vector like [3, 6, 4] where each entry is an index? If that's what you have in mind, you could quickly write a new action representation module extending https://github.com/facebookresearch/Pearl/blob/main/pearl/action_representation_modules/one_hot_action_representation_module.py so that an action's representation is a concatenation of one-hot representations, one per sub-action.

Hope this helps.
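As a sketch of that idea (plain Python, not Pearl's actual module interface; the function name is hypothetical), the concatenated one-hot encoding would turn each sub-action index into its own one-hot vector and join them:

```python
def multi_one_hot(action, sizes):
    """Encode a multi-discrete action as concatenated one-hot vectors.

    action: one index per dimension, e.g. [3, 6, 4]
    sizes:  number of choices per dimension, e.g. [4, 7, 5]
    Returns a flat 0/1 list of length sum(sizes).
    """
    assert len(action) == len(sizes)
    encoding = []
    for index, size in zip(action, sizes):
        one_hot = [0] * size
        one_hot[index] = 1       # mark the chosen option in this dimension
        encoding.extend(one_hot)  # append this dimension's one-hot block
    return encoding

# The action [3, 6, 4] over sizes [4, 7, 5] becomes a
# single vector of length 4 + 7 + 5 = 16.
print(multi_one_hot([3, 6, 4], [4, 7, 5]))
# → [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]
```

Unlike the flattened-`Discrete` workaround, this representation grows additively (sum of sizes) rather than multiplicatively, so it scales to larger multi-discrete spaces.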