SciML / ExponentialUtilities.jl

Fast and differentiable implementations of matrix exponentials, Krylov exponential matrix-vector multiplications ("expmv"), KIOPS, ExpoKit functions, and more. All your exponential needs in SciML form.
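As a quick illustration of the package's core operation, here is a minimal CPU sketch of `expv`, which approximates `exp(t*A)*b` via a Krylov subspace without ever forming the dense matrix exponential (the tolerance check printed at the end is illustrative, not part of the package's API):

```julia
using ExponentialUtilities, SparseArrays, LinearAlgebra

# Random sparse complex matrix and vector
A = sprand(ComplexF64, 100, 100, 0.05)
b = rand(ComplexF64, 100)

# Krylov approximation of exp(1.0 * A) * b
w = expv(1.0, A, b)

# Dense reference for comparison (only feasible for small n)
w_ref = exp(Matrix(A)) * b
println("relative error: ", norm(w - w_ref) / norm(w_ref))
```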

Home Page: https://docs.sciml.ai/ExponentialUtilities/stable/

Support for CUDA Arrays

albertomercurio opened this issue · comments

Hello,

I'm interested in using these tools with CUDA arrays, such as a dense ComplexF64 CuArray or a sparse CuSparseMatrix. According to the documentation, the array only needs to support the following functions:

  • Base.eltype(A)
  • Base.size(A, dim)
  • LinearAlgebra.mul!(y, A, x)
  • LinearAlgebra.opnorm(A, p=Inf) (I can supply this value manually)
  • LinearAlgebra.ishermitian(A)
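To illustrate how small this interface is, here is a hypothetical wrapper type that exposes only the functions listed above (the name `ShiftedOp` and the shift idea are my own example, not part of the package):

```julia
using LinearAlgebra, SparseArrays

# Hypothetical matrix-free operator representing A + shift*I.
# expv-style routines only need the interface methods below,
# never the entries of the operator itself.
struct ShiftedOp{T,M}
    A::M
    shift::T
end

Base.eltype(L::ShiftedOp) = eltype(L.A)
Base.size(L::ShiftedOp, dim) = size(L.A, dim)
LinearAlgebra.ishermitian(L::ShiftedOp) = ishermitian(L.A) && isreal(L.shift)
# Triangle inequality gives a valid upper bound on the operator norm.
LinearAlgebra.opnorm(L::ShiftedOp, p) = opnorm(L.A, p) + abs(L.shift)
function LinearAlgebra.mul!(y, L::ShiftedOp, x)
    mul!(y, L.A, x)          # y = A*x
    axpy!(L.shift, x, y)     # y += shift*x
    return y
end

A = sprand(ComplexF64, 50, 50, 0.1)
L = ShiftedOp(A, 2.0 + 0im)
x = rand(ComplexF64, 50)
y = similar(x)
mul!(y, L, x)
println("matches dense: ", y ≈ Matrix(A) * x .+ 2.0 .* x)
```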

The following code checks that a CuSparseMatrixCSC satisfies these requirements:

using CUDA, CUDA.CUSPARSE, LinearAlgebra, SparseArrays

test = CUSPARSE.CuSparseMatrixCSC(sprand(ComplexF64, 100, 100, 0.01))
if Base.eltype(test) == ComplexF64
    println("OK")
end
if Base.size(test, 2) == 100
    println("OK")
end
dy = CUDA.zeros(ComplexF64, 100)
y = CUDA.rand(ComplexF64, 100)
LinearAlgebra.mul!(dy, test, y)
# Compare the GPU product against the same product computed on the CPU
test2 = Array(test)
dy2 = Array(dy)
y2 = Array(y)
LinearAlgebra.mul!(dy2, test2, y2)
if Array(dy) == dy2
    println("OK")
end

But I can't compute, for example, expv; the following call returns an error:

expv(1.0, test, y, opnorm = opnorm(test2))

Scalar indexing is disallowed.
Invocation of getindex resulted in scalar indexing of a GPU array.
This is typically caused by calling an iterating implementation of a method.
Such implementations *do not* execute on the GPU, but very slowly on the CPU,
and therefore are only permitted from the REPL for prototyping purposes.
If you did intend to index this array, annotate the caller with @allowscalar.

The same problem happens in the dense matrix case.
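For debugging errors like this, CUDA.jl's `@allowscalar` macro can be used to let the offending scalar reads run (slowly, on the CPU), which helps locate which internal operation is falling back to scalar indexing. This is a prototyping aid only, not a fix:

```julia
using CUDA

x = CUDA.rand(Float64, 4)

# Outside the macro, x[1] would throw the scalar-indexing error above.
# Inside it, the read is permitted but executes off-device.
first_entry = CUDA.@allowscalar x[1]
println("first entry: ", first_entry)
```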

Closed by #101