alexfrom0815 / Online-3D-BPP-PCT

Code implementation of "Learning Efficient Online 3D Bin Packing on Packing Configuration Trees". We propose to enhance the practical applicability of the online 3D Bin Packing Problem (BPP) via learning on a hierarchical packing configuration tree, which makes it easy for the deep reinforcement learning (DRL) model to handle practical constraints and to perform well even in a continuous solution space.


RuntimeError: __class__ not set defining

Chengjlzzz opened this issue · comments

Dear author:
Thanks for sharing your code! When I ran main.py, I got an error:

Traceback (most recent call last):
  File "E:/User002/Online-3D-BPP-PCT-main/main.py", line 6, in <module>
    from model import *
  File "E:\User002\Online-3D-BPP-PCT-main\model.py", line 4, in <module>
    from attention_model import AttentionModel
  File "E:\User002\Online-3D-BPP-PCT-main\attention_model.py", line 9, in <module>
    class AttentionModelFixed(NamedTuple):
RuntimeError: __class__ not set defining 'AttentionModelFixed' as <class 'attention_model.AttentionModelFixed'>. Was __classcell__ propagated to type.__new__?

Process finished with exit code 1

Python 3.8
PyTorch 1.9.0

I have the same error, and my training environment is python 3.8.5 and pytorch 1.7.1.

I think this problem is caused by the Python version and the super() call inside the NamedTuple subclass. I changed the line

return super(AttentionModelFixed, self).__getitem__(key)

to

return tuple.__getitem__(self, key)

and luckily, it works. But I don't know whether this modification will change the results of the code.
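For reference, here is a minimal, self-contained sketch of that workaround (the field names below are illustrative, not the repo's actual ones). Since a typing.NamedTuple subclass inherits directly from tuple, tuple.__getitem__(self, key) returns the same element that super().__getitem__(key) would, so the swap should not change behavior for plain integer or slice indexing:

```python
from typing import NamedTuple

class AttentionModelFixed(NamedTuple):
    # Illustrative fields; the repo's real class holds precomputed
    # attention tensors (these names are assumptions, not the actual ones).
    node_embeddings: list
    context: list

    def __getitem__(self, key):
        # The original code calls
        #   super(AttentionModelFixed, self).__getitem__(key)
        # which can raise "RuntimeError: __class__ not set ..." on some
        # Python/typing version combinations. Because a NamedTuple
        # subclass is a plain tuple underneath, delegating to tuple
        # directly returns the same element for int and slice keys:
        return tuple.__getitem__(self, key)

fixed = AttentionModelFixed(node_embeddings=[1, 2], context=[3, 4])
print(fixed[0])    # same value as fixed.node_embeddings
print(fixed[0:2])  # slicing returns a plain tuple of the fields
```

Since indexing goes through the same underlying tuple storage either way, the model's outputs should be unaffected by this change.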

Sorry for the delayed reply due to tons of things. The versions I'm using are Python 3.7.7 and torch 1.10.1, and this works on my computer. Hope this helps you.

Thanks, it works.

Thanks a lot!

Thanks for this thread, helped me a lot