SHI-Labs / Neighborhood-Attention-Transformer

Neighborhood Attention Transformer, arxiv 2022 / CVPR 2023. Dilated Neighborhood Attention Transformer, arxiv 2022


How to debug a CUDA kernel?

m1710545173 opened this issue · comments

Hello author, I want to change nattenqkrpb_cuda_forward_kernel to achieve the functions I need, but I don't know much about CUDA programming and I don't know how to debug a CUDA kernel. The tools I am using are Visual Studio and libtorch on Windows 10. Although I can debug some parts of the .cu file, I can't debug the CUDA kernel itself. So, I want to know what tools and methods you use to debug CUDA code. Please give me some suggestions!

Hello and thank you for your interest.
It really depends on what you mean by debugging.

  • When it's a compilation issue, you may find building with setup.py better, as the logs are a bit more concise.
  • If you want to check your implementation, you probably have to come up with a pure Python/PyTorch implementation and compare the outputs of the two. That's what we typically do: send random tensors of different shapes through the CUDA version and the "torch version" with the same weights, compute the outputs, check that they allclose, and then do the same for the backward pass.
  • If you're sure of your forward pass being correct, gradcheck is probably a better way to check if your backward pass kernel is correct.
  • It's also not a bad idea to have assertions in place while debugging, but I'd recommend leaving as few assertions in the device code (the kernel) as possible and doing most of the assertions before the kernel call.
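A minimal sketch of the forward/backward checks described above. The two implementations here are simple stand-ins (a plain PyTorch reduction vs. an einsum formulation), not the actual NATTEN kernels; in practice you would replace one side with your compiled CUDA extension:

```python
import torch
from torch.autograd import gradcheck

def reference_op(q, k):
    # Pure-PyTorch "reference" implementation (stand-in for the torch version)
    return (q * k).sum(dim=-1)

# Forward check: run random inputs through both versions and compare with allclose
q = torch.randn(2, 4, 8)
k = torch.randn(2, 4, 8)
out_ref = reference_op(q, k)
out_alt = torch.einsum("bnd,bnd->bn", q, k)  # stand-in for the CUDA kernel output
assert torch.allclose(out_ref, out_alt, atol=1e-6)

# Backward check: gradcheck compares analytic vs. numerical gradients.
# It needs double precision and inputs with requires_grad=True.
qd = torch.randn(2, 4, 8, dtype=torch.double, requires_grad=True)
kd = torch.randn(2, 4, 8, dtype=torch.double, requires_grad=True)
ok = gradcheck(reference_op, (qd, kd), eps=1e-6, atol=1e-4)
print("gradcheck passed:", ok)
```

For a real custom op, the CUDA path would be wrapped in a torch.autograd.Function, and gradcheck would be run on that Function once the forward pass is known to be correct.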

If you're getting into optimization, you may find PyTorch's profiler useful for measuring latency, and you'd probably need NVIDIA Nsight to profile in more detail.
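As a starting point, a quick latency breakdown with PyTorch's built-in profiler might look like the following (the workload here is just a placeholder attention-like matmul, not the NATTEN op; add ProfilerActivity.CUDA when profiling on a GPU):

```python
import torch
from torch.profiler import profile, ProfilerActivity

x = torch.randn(8, 64, 64)

# Profile a few iterations of a placeholder workload on CPU
with profile(activities=[ProfilerActivity.CPU]) as prof:
    for _ in range(10):
        y = torch.softmax(x @ x.transpose(-2, -1), dim=-1)

# Aggregate per-operator timings, sorted by total CPU time
table = prof.key_averages().table(sort_by="cpu_time_total", row_limit=5)
print(table)
```

The table shows per-operator time, which helps identify whether the custom kernel or the surrounding glue code dominates; for kernel-level metrics (occupancy, memory throughput), Nsight Compute is the usual next step.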

I hope you find these useful, but if you need more details, please let us know.

Thank you very much for your suggestions! I will have a try.

Closing this due to inactivity. If you still have questions feel free to open it back up.