takuseno / d3rlpy

An offline deep reinforcement learning library

Home Page: https://takuseno.github.io/d3rlpy

[QUESTION] Offline Learning via custom MDPDataset

Charles-Lim93 opened this issue · comments

Greetings,

I'm looking for documentation on creating my own custom MDPDataset, and I'm wondering how to train a model with it.

I'm using my own environment for simulation, and I don't see how to combine it with d3rlpy's environments. Is there any way to train with a custom environment (e.g. AirSim, NVIDIA DRIVE Sim)?

Can anyone share or suggest example code for training on your own MDPDataset?

Thank you in advance.

@Charles-Lim93 Hi, thanks for the issue. d3rlpy supports the OpenAI Gym and Gymnasium interfaces, so you can bridge an arbitrary simulator through either of them. How to build that bridge interface is outside d3rlpy's scope, though; please ask such questions in those projects' repositories.
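To make the pattern concrete, here is a minimal sketch of the usual offline-RL workflow: log transitions from your own simulator into flat NumPy arrays, then hand those arrays to d3rlpy. `MySimulator` below is a hypothetical stand-in for a custom simulator such as AirSim, and the commented-out d3rlpy calls at the end follow the API names from the d3rlpy documentation (`d3rlpy.dataset.MDPDataset`, `fit`), which may differ slightly between library versions.

```python
import numpy as np

# Hypothetical stand-in for a custom simulator (e.g. AirSim); replace
# these two methods with calls into your real simulator's API.
class MySimulator:
    def reset_state(self):
        return np.zeros(4, dtype=np.float32)

    def advance(self, state, action):
        # Toy dynamics: random drift, reward = negative distance from origin.
        next_state = state + np.random.randn(4).astype(np.float32) * 0.01
        reward = float(-np.linalg.norm(next_state))
        done = bool(np.linalg.norm(next_state) > 1.0)
        return next_state, reward, done

# Collect logged transitions into flat arrays -- one row per step,
# with `terminals` flagging the last step of each episode. This is the
# array layout the MDPDataset constructor documents.
sim = MySimulator()
observations, actions, rewards, terminals = [], [], [], []
for _ in range(5):                      # 5 short episodes
    obs = sim.reset_state()
    for t in range(10):
        act = np.random.randint(2)      # e.g. a 2-action discrete policy
        next_obs, rew, done = sim.advance(obs, act)
        observations.append(obs)
        actions.append(act)
        rewards.append(rew)
        terminals.append(1.0 if done or t == 9 else 0.0)
        obs = next_obs
        if done:
            break

observations = np.stack(observations)
actions = np.asarray(actions)
rewards = np.asarray(rewards, dtype=np.float32)
terminals = np.asarray(terminals, dtype=np.float32)

# With the arrays in place, building the dataset and training offline is
# roughly (per the d3rlpy docs; exact names vary across versions):
#
#   import d3rlpy
#   dataset = d3rlpy.dataset.MDPDataset(observations, actions, rewards, terminals)
#   cql = d3rlpy.algos.DiscreteCQLConfig().create()
#   cql.fit(dataset, n_steps=10000)

print(observations.shape, actions.shape)
```

Note that no Gym bridge is needed for purely offline training: the dataset alone suffices. A Gym/Gymnasium-compatible wrapper around the simulator only becomes necessary for online fine-tuning or for evaluating the learned policy in the loop.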