zdevito / ATen

ATen: A TENsor library for C++11

Which AT_DISPATCH... file should be used?

unholdmx opened this issue · comments

I am having a hard time understanding which AT_DISPATCH macro from here should be used to pass tensors of different dtypes to a kernel.

While I understand how #define AT_DISPATCH_FLOATING_TYPES(TYPE, NAME, ...) or #define AT_DISPATCH_INTEGRAL_TYPES(TYPE, NAME, ...) can be used when all tensors have the same dtype
(here in a torch environment):

  AT_DISPATCH_FLOATING_TYPES(some_float_tensor.scalar_type(), "Some floating error message", [&] {
    some_cuda_kernel<scalar_t><<<blocks, threads>>>(
      some_float_tensor.packed_accessor<scalar_t,2,torch::RestrictPtrTraits,size_t>(),
      ....
    );
  });

all my attempts to dispatch fail when the tensors have different dtypes.
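For reference, the usual way to handle two dtypes is to nest two AT_DISPATCH macros and save the outer `scalar_t` under an alias before the inner dispatch shadows it. The sketch below assumes ATen/libtorch headers and a hypothetical two-parameter kernel named `some_cuda_kernel`; the tensor and launch-configuration names are placeholders, not part of the library:

```cpp
#include <ATen/ATen.h>
#include <ATen/Dispatch.h>
#include <torch/types.h>

// Hypothetical kernel templated on both element types.
template <typename float_scalar_t, typename int_scalar_t>
__global__ void some_cuda_kernel(
    torch::PackedTensorAccessor<float_scalar_t, 2, torch::RestrictPtrTraits, size_t> a,
    torch::PackedTensorAccessor<int_scalar_t, 2, torch::RestrictPtrTraits, size_t> b);

void launch(at::Tensor some_float_tensor, at::Tensor some_int_tensor,
            dim3 blocks, dim3 threads) {
  AT_DISPATCH_FLOATING_TYPES(
      some_float_tensor.scalar_type(), "outer dispatch", [&] {
        // Alias the outer scalar_t now: the inner dispatch defines its
        // own scalar_t that shadows this one.
        using float_scalar_t = scalar_t;
        AT_DISPATCH_INTEGRAL_TYPES(
            some_int_tensor.scalar_type(), "inner dispatch", [&] {
              using int_scalar_t = scalar_t;
              some_cuda_kernel<float_scalar_t, int_scalar_t>
                  <<<blocks, threads>>>(
                      some_float_tensor.packed_accessor<
                          float_scalar_t, 2, torch::RestrictPtrTraits, size_t>(),
                      some_int_tensor.packed_accessor<
                          int_scalar_t, 2, torch::RestrictPtrTraits, size_t>());
            });
      });
}
```

The cost of nesting is that the kernel is instantiated for every combination of the two type lists, so compile time grows with the product of their sizes; restricting each dispatch to the narrowest macro that covers the dtypes you actually expect keeps that in check.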