SciML / ExponentialUtilities.jl

Fast and differentiable implementations of matrix exponentials, Krylov exponential matrix-vector multiplications ("expmv"), KIOPS, ExpoKit functions, and more. All your exponential needs in SciML form.

Home Page: https://docs.sciml.ai/ExponentialUtilities/stable/

Handling of different exp methods

jarlebring opened this issue · comments

There are several matrix exponential methods in the literature, each suited to different settings. At the moment we have Padé-based scaling and squaring. There is also the direct diagonalization approach, as well as polynomial-based scaling and squaring (which may perform better in a GPU setting), e.g. the polynomial methods of Sastre: http://personales.upv.es/~jorsasma/software/expmpol.m or https://arxiv.org/abs/2107.12198. If we add more implementations, I think we need a way for a user to select among them. Sketch of a system design using dispatch:

```julia
struct ExpMethodHigham
    do_balancing::Bool
end
ExpMethodHigham() = ExpMethodHigham(true)

struct ExpMethodDiagonalization end

struct ExpMethodBase end  # call Base.exp
```

....

Implementations could then be provided like this:

```julia
using LinearAlgebra

function _exp!(A, method::ExpMethodDiagonalization, cache)
    F = eigen!(A)
    copyto!(A, F.vectors * Diagonal(exp.(F.values)) / F.vectors)
    return A
end
```
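As a self-contained sanity check of this dispatch-based design, the snippet below defines the diagonalization method and compares it against the stock `exp` from `LinearAlgebra`; a symmetric input is used so the eigendecomposition stays real (for general nonnormal matrices the eigenvector basis can be complex or ill-conditioned, so this is only a sketch):

```julia
using LinearAlgebra

struct ExpMethodDiagonalization end

# Overwrite A with exp(A) computed via an eigendecomposition.
function _exp!(A, ::ExpMethodDiagonalization, cache)
    F = eigen!(A)
    copyto!(A, F.vectors * Diagonal(exp.(F.values)) / F.vectors)
    return A
end

A = [2.0 1.0; 1.0 2.0]  # symmetric, so eigen! returns a real factorization
E = _exp!(copy(A), ExpMethodDiagonalization(), nothing)
@assert E ≈ exp(A)      # agrees with LinearAlgebra's scaling-and-squaring exp
```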

Functions that need `exp` can take the method object as a keyword argument:

```julia
function phi(z::T, k::Integer; cache=nothing, expmethod=ExpMethodHigham())
    ....
    P = _exp!(A, expmethod, cache)
    ....
end
```
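To illustrate how a downstream routine could forward the method object, here is a small runnable sketch; `expv_dense` is a hypothetical caller invented for this example (it is not part of the package), and `ExpMethodBase` follows the struct sketch above:

```julia
using LinearAlgebra

struct ExpMethodBase end                 # fall back to the stock exp
_exp!(A, ::ExpMethodBase, cache) = copyto!(A, exp(A))

# Hypothetical caller: forms exp(t*A)*v densely, forwarding whichever
# exp method the user selected via the keyword argument.
function expv_dense(t, A, v; expmethod=ExpMethodBase(), cache=nothing)
    E = _exp!(t * A, expmethod, cache)   # t*A allocates a scratch matrix
    return E * v
end

A = [0.0 1.0; 0.0 0.0]                   # nilpotent: exp(t*A) = I + t*A
v = [1.0, 2.0]
@assert expv_dense(0.5, A, v) ≈ [2.0, 2.0]
```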

The non-allocating version can be handled with dispatch as well:

```julia
function allocate_mem(A, ::ExpMethodHigham)
    return [similar(A) for _ in 1:5]
end
function allocate_mem(A, ::ExpMethodDiagonalization)
    return nothing
end
```

The function `allocate_mem` can either be called by the user or invoked automatically when `cache=nothing`.
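A convenience layer along those lines might look like the following sketch; the two-argument `_exp!` wrapper is hypothetical, and the cache sizes just mirror the sketch above:

```julia
struct ExpMethodHigham
    do_balancing::Bool
end
ExpMethodHigham() = ExpMethodHigham(true)
struct ExpMethodDiagonalization end

allocate_mem(A, ::ExpMethodHigham)          = [similar(A) for _ in 1:5]
allocate_mem(A, ::ExpMethodDiagonalization) = nothing

# Hypothetical wrapper: allocate the cache on demand when the caller
# does not supply one, then dispatch to the three-argument _exp!.
function _exp!(A, method; cache=nothing)
    cache === nothing && (cache = allocate_mem(A, method))
    return _exp!(A, method, cache)
end

c = allocate_mem(rand(3, 3), ExpMethodHigham())
@assert length(c) == 5 && all(size(m) == (3, 3) for m in c)
@assert allocate_mem(rand(3, 3), ExpMethodDiagonalization()) === nothing
```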

(We are working on other matrix exponential codes, and this package would be a natural place to put them; otherwise we would need to create a separate package.)

I think that makes a lot of sense.