mit-han-lab / torchsparse

[MICRO'23, MLSys'22] TorchSparse: Efficient Training and Inference Framework for Sparse Convolution on GPUs.

Home Page: https://torchsparse.mit.edu


[BUG] AttributeError: module 'torchsparse.backend' has no attribute 'build_kernel_map_subm_hashmap'

huohuohuohuohuohuohuohuo opened this issue · comments

commented

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

When I run test.py in this project, I get two errors:

FAILED (errors=2)

Error
Traceback (most recent call last):
File "/home/hx/PycharmProjects/torchsparse/tests/test.py", line 20, in test_single_layer
mean_adiff, max_rdiff = test_single_layer_convolution_forward(
File "/home/hx/PycharmProjects/torchsparse/tests/python/test_single_layer_conv.py", line 202, in test_single_layer_convolution_forward
out = model(feats_t, coords_t)
File "/home/hx/anaconda3/envs/torchsparse/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/home/hx/PycharmProjects/torchsparse/tests/python/test_single_layer_conv.py", line 61, in forward
return self.net(ts_tensor)
File "/home/hx/anaconda3/envs/torchsparse/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/home/hx/anaconda3/envs/torchsparse/lib/python3.9/site-packages/torch/nn/modules/container.py", line 204, in forward
input = module(input)
File "/home/hx/anaconda3/envs/torchsparse/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/home/hx/PycharmProjects/torchsparse/torchsparse/nn/modules/conv.py", line 98, in forward
return F.conv3d(
File "/home/hx/PycharmProjects/torchsparse/torchsparse/nn/functional/conv/conv.py", line 92, in conv3d
kmap = F.build_kernel_map(
File "/home/hx/PycharmProjects/torchsparse/torchsparse/nn/functional/conv/kmap/build_kmap.py", line 85, in build_kernel_map
kmap = build_kmap_implicit_GEMM_hashmap_on_the_fly(
File "/home/hx/PycharmProjects/torchsparse/torchsparse/nn/functional/conv/kmap/func/hashmap_on_the_fly.py", line 48, in build_kmap_implicit_GEMM_hashmap_on_the_fly
func = torchsparse.backend.build_kernel_map_subm_hashmap
AttributeError: module 'torchsparse.backend' has no attribute 'build_kernel_map_subm_hashmap'

Error
Traceback (most recent call last):
File "/home/hx/PycharmProjects/torchsparse/tests/test.py", line 46, in test_to_dense
max_adiff = test_to_dense_forward()
File "/home/hx/PycharmProjects/torchsparse/tests/python/test_to_dense.py", line 48, in test_to_dense_forward
output = to_dense(feats_t, coords_t, spatial_range).cpu().numpy()
File "/home/hx/PycharmProjects/torchsparse/torchsparse/utils/to_dense.py", line 64, in to_dense
return ToDenseFunction.apply(feats, coords, spatial_range)
File "/home/hx/PycharmProjects/torchsparse/torchsparse/utils/to_dense.py", line 30, in forward
torchsparse.backend.to_dense_forward_cuda(
AttributeError: module 'torchsparse.backend' has no attribute 'to_dense_forward_cuda'

Expected Behavior

No error

Environment

- GCC: gcc (Ubuntu 11.4.0-2ubuntu1~20.04) 11.4.0
- NVCC: Cuda compilation tools, release 11.7, V11.7.99
- PyTorch: 1.13.0+cu117
- PyTorch CUDA: 11.7
- TorchSparse: 2.1.0+torch113cu117

Anything else?

No response

Hi @huohuohuohuohuohuohuohuo. It seems you haven't installed TorchSparse++ correctly. Have you solved the problem?

commented

I did install TorchSparse++ successfully, and no error was reported. I copied test_single_layer_conv.py and test_to_dense.py to another path, ran them separately, and they worked.

Thank you for pointing that out. Generally, you can invoke these two *.py files directly by running test.py under tests. I have also fixed some outdated imports in the codebase.

I couldn't reproduce the error on my side. I suspect it is related to your Python environment: the torchsparse.backend extension may not have been built and linked successfully in the first place.
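One quick way to confirm whether the compiled backend is the culprit is to probe it for the symbols the Python frontend expects before running the full test suite. Below is a minimal, generic sketch (not part of TorchSparse itself); the helper name `missing_backend_symbols` is hypothetical, and it is demonstrated on a stdlib module as a stand-in, since the real check requires a built torchsparse.backend.

```python
import importlib

def missing_backend_symbols(module_name, symbols):
    """Return the subset of `symbols` that the named module lacks.

    An empty list means the module exposes everything the caller
    expects; any missing name would surface later as an
    AttributeError like the ones in the tracebacks above.
    """
    module = importlib.import_module(module_name)
    return [name for name in symbols if not hasattr(module, name)]

# Demonstration on a stdlib module as a stand-in:
# "hypothetical_kernel" is deliberately absent, so it is reported.
print(missing_backend_symbols("math", ["sqrt", "hypothetical_kernel"]))

# The actual check for this issue would be, e.g.:
# missing_backend_symbols("torchsparse.backend",
#     ["build_kernel_map_subm_hashmap", "to_dense_forward_cuda"])
```

If the real check reports missing names, the extension was likely built from an older source tree (or a stale build was picked up), and rebuilding TorchSparse++ from source in a clean environment should resolve it.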

Closing this issue as completed. Feel free to reopen it if you have further questions.