ai4co / rl4co

A PyTorch library for all things Reinforcement Learning (RL) for Combinatorial Optimization (CO)

Home Page: https://rl4.co


[BUG] PDPEnv is not importable

Junyoungpark opened this issue · comments

Describe the bug

PDPEnv is not importable.

Python 3.9.16 (main, Mar  8 2023, 14:00:05)
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from rl4co.envs import PDPEnv
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/silab9/proj/rl4co/rl4co/envs/__init__.py", line 1, in <module>
    from rl4co.envs.base import RL4COEnvBase
  File "/home/silab9/proj/rl4co/rl4co/envs/base.py", line 9, in <module>
    from rl4co.data.dataset import TensorDictDataset
ModuleNotFoundError: No module named 'rl4co.data'
>>>
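The traceback shows the import system failing to resolve the `rl4co.data` subpackage even though `rl4co` itself imports. One way to narrow this down is to ask `importlib` where each name resolves to; a minimal, hedged sketch (demonstrated here with stdlib names so it runs anywhere, but for this report one would call `locate("rl4co")` and `locate("rl4co.data")`):

```python
import importlib.util

def locate(name):
    """Return the file the import system resolves `name` to, or None."""
    try:
        spec = importlib.util.find_spec(name)
    except ModuleNotFoundError:
        # Raised when a parent package exists but is not importable as a package
        return None
    return None if spec is None else spec.origin

# Stand-ins for locate("rl4co") / locate("rl4co.data"):
print(locate("json"))       # a real filesystem path
print(locate("json.nope"))  # None: submodule cannot be located
```

If `locate("rl4co")` points at a directory other than the checkout you ran `pip install -e .` in, the error is a stale or shadowed install rather than missing code.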

To Reproduce

Steps to reproduce the behavior.

Please try to provide a minimal example to reproduce the bug. Error messages and stack traces are also helpful.

Please use the markdown code blocks for both code and stack traces.

pip install -e .

from rl4co.envs import PDPEnv

Expected behavior

PDPEnv should be importable without errors.

Reason and Possible fixes

If you know or suspect the reason for this bug, paste the code lines and suggest modifications.
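One plausible cause (an assumption, not confirmed from the report): a subpackage directory such as `rl4co/data/` missing its `__init__.py`, which makes the subpackage invisible to a plain editable install. The hypothetical helper below walks a package tree and flags any directory that contains `.py` files but no `__init__.py`:

```python
from pathlib import Path

def missing_inits(package_root):
    """List subdirectories that hold .py files but lack an __init__.py."""
    root = Path(package_root)
    return [
        str(d) for d in root.rglob("*")
        if d.is_dir()
        and not d.name.startswith((".", "__"))          # skip __pycache__ etc.
        and any(f.suffix == ".py" for f in d.iterdir() if f.is_file())
        and not (d / "__init__.py").exists()
    ]

# For this report one would run: missing_inits("rl4co")
```

If that turns up nothing, a fresh `pip uninstall rl4co && pip install -e .` is worth trying, since stale editable-install metadata can also leave newly added subpackages unresolvable.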

Checklist

  • I have checked that there is no similar issue in the repo (required)
  • I have provided a minimal working example to reproduce the bug (required)

It works on my end. Have you checked the error message? Is there a reason it would not be importable in your environment?