mit-han-lab / torchsparse

[MICRO'23, MLSys'22] TorchSparse: Efficient Training and Inference Framework for Sparse Convolution on GPUs.

Home Page: https://torchsparse.mit.edu


Uncertainty Regarding the Usage of the Tuner in torchsparse

zhangchenqi123 opened this issue · comments

commented

I am currently uncertain about the usage of the tuner in the torchsparse codebase. Although I noticed the import statement `from .utils.tune import tune` in `torchsparse/__init__.py`, I am unable to locate where the tuner is actually invoked, and the mechanism through which it operates remains unclear to me.

Additionally, while exploring `torchsparse/examples/example.py`, it seems that all instances of `spnn.Conv3d` ultimately use the "ImplicitGEMM" dataflow by default, and the auto-tuner does not take effect to change the dataflow of the convolutional layers as described in the TorchSparse++ paper.
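For reference, here is roughly the setup I am running, adapted from that example. This is my own approximation: the `SparseTensor` keyword names and the position of the batch-index column may vary across versions, and I skip the coordinate deduplication (`sparse_quantize`) that the real example performs.

```python
import torch
import torchsparse.nn as spnn
from torchsparse import SparseTensor

# Random point cloud: N points, 4 feature channels each.
# Coordinates are integers; one column holds the batch index
# (its position, first or last, depends on the torchsparse version).
# Duplicate coordinates are ignored here for brevity.
N = 10000
coords = torch.randint(0, 100, size=(N, 4), dtype=torch.int).cuda()
feats = torch.randn(N, 4).cuda()
x = SparseTensor(feats=feats, coords=coords)

# A small network built only from spnn.Conv3d layers; every layer
# appears to default to the "ImplicitGEMM" dataflow.
model = torch.nn.Sequential(
    spnn.Conv3d(4, 32, kernel_size=3),
    spnn.ReLU(True),
    spnn.Conv3d(32, 64, kernel_size=3, stride=2),
).cuda().eval()

with torch.no_grad():
    out = model(x)
```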

Could you kindly provide clarification or guidance on these points? Understanding how the tuner is used and how "ImplicitGEMM" functions within the `Conv3d` instances would greatly help me understand the codebase.

Thank you for your assistance.

Hi @zhangchenqi123, thank you for your interest in TorchSparse!

The tuner is implemented in `torchsparse/utils/tune.py`. It runs the sparse convolution model several times to decide the backend configuration of the sparse convolution kernels, including the dataflow and the kernel parameters.

In `example.py`, we did not include the tuner for the sake of simplicity. Please refer to our documentation for usage details.

You can also find example auto-tuner code in our artifact benchmark, at line #L180 of `artifact-p2/evaluation/evaluation.py`.
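For completeness, here is a minimal sketch of how the tuner could be wired in, based on the `from .utils.tune import tune` export mentioned above. The keyword names (`data_loader`, `n_samples`, `collect_fn`) and the data format are illustrative assumptions, not the verified signature; please check the documentation and `tune.py` for the exact interface.

```python
import torch
import torchsparse
import torchsparse.nn as spnn
from torchsparse import SparseTensor

def random_sample(n=10000):
    # Hypothetical helper: one random sparse sample (same caveats as
    # above regarding coordinate layout and deduplication).
    coords = torch.randint(0, 100, size=(n, 4), dtype=torch.int).cuda()
    feats = torch.randn(n, 4).cuda()
    return {"input": SparseTensor(feats=feats, coords=coords)}

# Assumed data format: an iterable of dicts holding a SparseTensor
# under the key "input".
dataset = [random_sample() for _ in range(10)]

model = torch.nn.Sequential(
    spnn.Conv3d(4, 32, kernel_size=3),
    spnn.ReLU(True),
    spnn.Conv3d(32, 64, kernel_size=3),
).cuda().eval()

# Profile the model on a few samples; the tuner records the fastest
# backend configuration (dataflow and kernel parameters) it finds.
torchsparse.tune(
    model,
    data_loader=dataset,                    # assumed keyword
    n_samples=10,                           # assumed: profiling iterations
    collect_fn=lambda data: data["input"],  # assumed: extracts the model input
)

# Later forward passes reuse the tuned configuration.
with torch.no_grad():
    for data in dataset:
        out = model(data["input"])
```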

commented

Thanks a lot for your kind reply!
I will try the tuner later.