mit-han-lab / torchsparse

[MICRO'23, MLSys'22] TorchSparse: Efficient Training and Inference Framework for Sparse Convolution on GPUs.

Home Page: https://torchsparse.mit.edu

Does torchsparse support pooling blocks?

Tortoise0Knight opened this issue · comments

e.g., the implementation in MinkowskiEngine: https://nvidia.github.io/MinkowskiEngine/pooling.html#minkowskimaxpooling.
I could only find global_max_pool().

Hi. We haven't implemented those pooling kernels yet, but we will consider implementing them. Thank you for reaching out!

I also need average pooling for my application and would appreciate it if you could implement it. I would also be happy if you could suggest a way to implement average pooling with convolutions. I thought of using a convolution with all kernel elements set to 1/N, but N needs to be the number of active voxels inside the receptive field, and I do not know how to obtain that number.
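One common workaround (a sketch, not torchsparse's API): run the same all-ones kernel twice, once over the features and once over a constant-ones channel restricted to active voxels. The second convolution yields exactly N, the active-voxel count per receptive field, so dividing the two results gives the average. A minimal dense NumPy illustration of the counting trick, with a hypothetical `masked_avg_pool_1d` helper:

```python
import numpy as np

def masked_avg_pool_1d(features, mask, k):
    """Average `features` over sliding windows of size k, counting only
    positions where mask == 1 (the "active voxels").

    This mimics average pooling as two convolutions with an all-ones
    kernel: one over the masked features (numerator) and one over the
    mask itself (denominator = N, the active-voxel count per window).
    """
    kernel = np.ones(k)
    num = np.convolve(features * mask, kernel, mode="valid")
    den = np.convolve(mask.astype(float), kernel, mode="valid")
    # Guard against windows with no active voxels (N == 0).
    return np.where(den > 0, num / np.maximum(den, 1.0), 0.0)

feats = np.array([2.0, 4.0, 6.0, 8.0, 0.0])
mask = np.array([1, 1, 0, 1, 0])
print(masked_avg_pool_1d(feats, mask, 3))  # → [3. 6. 8.]
```

In a sparse-convolution setting, the denominator pass would be a sparse convolution with unit weights over a feature map of ones, which reuses the existing kernel-map machinery rather than requiring a dedicated pooling kernel.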

Yes, that would be extremely useful. I was in the process of migrating my code from MinkowskiEngine, but sadly the lack of pooling layers makes this impossible for now.

@zhijian-liu @kentang-mit