michael-lehn / FLENS

Flexible Library for Efficient Numerical Solutions

Home Page: http://apfel.mathematik.uni-ulm.de/~lehn/FLENS/index.html


Any plan to add back the support of sparse matrix/vector?

byzhang opened this issue · comments

I remember the old version does support sparse matrix.

Yes, I just started with porting them back!

Nice!
Thanks,
-B


I will go on with extending functionality (e.g. sparse-blas, iterative solvers, ...) and testing of sparse matrices in the next weeks.

I can also add some very basic sparse vectors. However, those will be really experimental. I am not really familiar with the kinds of storage schemes that are required for performance. Both (storage scheme and performance) are closely related to the kind of application, just like for sparse matrices.

Hi Michael,

Appreciate it!

Some math expression libraries treat a sparse vector as a specialized sparse
matrix, omitting the row-pointer array of the CSR format.
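That idea can be sketched in a few lines of plain C++. This is only an illustration of the storage scheme, not the FLENS API; the `SparseVector` type and its fields are hypothetical names. The point is that a sparse vector is exactly one CSR row with the row-pointer array dropped:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch: a sparse vector stored as parallel index/value
// arrays -- effectively a single row of a CSR matrix with the
// row-pointer array omitted.
struct SparseVector {
    std::vector<std::size_t> index;   // positions of the non-zeros (sorted)
    std::vector<double>      value;   // the non-zero values
    std::size_t              length;  // logical length of the vector
};

// Dot product with a dense vector: only the stored non-zeros are touched.
double dot(const SparseVector &x, const std::vector<double> &y)
{
    double result = 0;
    for (std::size_t k = 0; k < x.index.size(); ++k) {
        result += x.value[k] * y[x.index[k]];
    }
    return result;
}
```

The work is proportional to the number of non-zeros rather than the logical length, which is the whole motivation for the format.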

Thanks,
-B


Yes, I saw that when I was "asking Google" about sparse matrices. Such a format is certainly a good default for most tasks.

So at the moment we have the following flavors of sparse matrices:
(1) General matrices with CoordStorage, CRS, CCS
(2) Symmetric matrices with CoordStorage, CRS, CCS
(3) Hermitian matrices with CoordStorage, CRS, CCS

  • For sparse matrices with CRS/CCS format the matrix-vector product (with DenseVector) is implemented.
  • For matrices with CoordStorage this could easily be added, but IMHO there's no useful application for that.
  • These matrix types can also be used for interfacing with direct sparse solvers like SuperLU or PARDISO.
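The CRS matrix-vector product mentioned above can be sketched in plain C++. This is not the FLENS implementation; the `CRSMatrix` type and field names are illustrative, but the three-array layout and the loop structure are the standard CRS scheme:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative CRS (compressed row storage) layout: three arrays.
struct CRSMatrix {
    std::size_t              numRows;
    std::vector<std::size_t> rowPtr;  // size numRows+1; row i occupies [rowPtr[i], rowPtr[i+1])
    std::vector<std::size_t> col;     // column index of each stored non-zero
    std::vector<double>      val;     // value of each stored non-zero
};

// y = A*x for a CRS matrix A and a dense vector x.
std::vector<double> spmv(const CRSMatrix &A, const std::vector<double> &x)
{
    std::vector<double> y(A.numRows, 0.0);
    for (std::size_t i = 0; i < A.numRows; ++i) {
        for (std::size_t k = A.rowPtr[i]; k < A.rowPtr[i+1]; ++k) {
            y[i] += A.val[k] * x[A.col[k]];
        }
    }
    return y;
}
```

For CCS the roles of rows and columns are swapped, so the same product scatters into y column by column instead of accumulating row by row.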

There are several options to go on:
(A) Triangular sparse matrices and a sparse version of TRSV (solving Lu=b, Rx=u where L, R are sparse and triangular). This could be interesting if one wants to reuse the L, U factors computed by SuperLU.
(B) Add sparse block matrices, i.e. sparse matrices whose elements are small nb x nb matrices, where nb is a constant. This could be interesting for FEM with higher-order elements.
(C) Add sparse matrices that support views. For compressed row/column storage this would require a slightly different format with 4 arrays instead of 3 arrays.
(D) Sparse vectors.

About (C) and (D): So far I have not had an application/need for this.
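Option (A), the sparse TRSV, amounts to substitution over the stored non-zeros. A minimal sketch of the forward solve Lu=b in plain C++, under two stated assumptions that are mine rather than the source's: the matrix is in a three-array CRS layout (illustrative struct, not the FLENS types), and each row stores its diagonal entry last:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative CRS layout (not the FLENS types).
struct CRSMatrix {
    std::size_t              numRows;
    std::vector<std::size_t> rowPtr;  // size numRows+1
    std::vector<std::size_t> col;     // column indices of stored non-zeros
    std::vector<double>      val;     // stored non-zero values
};

// Forward substitution for L*u = b, L lower triangular in CRS form.
// Assumption: each row stores its diagonal entry as its last non-zero.
std::vector<double> lowerSolve(const CRSMatrix &L, const std::vector<double> &b)
{
    std::vector<double> u(b);
    for (std::size_t i = 0; i < L.numRows; ++i) {
        std::size_t last = L.rowPtr[i+1] - 1;       // diagonal entry of row i
        for (std::size_t k = L.rowPtr[i]; k < last; ++k) {
            u[i] -= L.val[k] * u[L.col[k]];         // subtract known terms
        }
        u[i] /= L.val[last];                        // divide by the diagonal
    }
    return u;
}
```

The backward solve Rx=u is the mirror image, iterating rows from last to first with the diagonal stored first.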

Especially for (D) we should first make a list of the relevant BLAS-type operations. I don't like an implementation that can handle each and every possible case but performs poorly for the few cases that are most relevant in practice. For example, for axpy "y <- y + alpha*x" the cases
(i) y is a dense vector, x is a sparse vector
(ii) x and y are both sparse vectors
would require different implementations. Furthermore, if only (i) is needed we only implement (i). So what is needed is a complete list of BLAS-type functions that need to be supported. This list should in particular identify the types of the involved matrices/vectors. Note that once a BLAS-type function is available, the mapping of overloaded operators comes for free.
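Case (i) is the simpler of the two and shows why it deserves its own implementation. A hedged sketch in plain C++ (the function and parameter names are illustrative, not FLENS's): with y dense, each non-zero of x scatters directly into y, with no merging of index sets as case (ii) would require:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Case (i) of the axpy discussion: y <- y + alpha*x with y dense and
// x sparse (parallel index/value arrays). Only the stored non-zeros
// of x are touched; y is updated in place.
void sparseAxpy(double alpha,
                const std::vector<std::size_t> &xIndex,
                const std::vector<double>      &xValue,
                std::vector<double>            &y)
{
    for (std::size_t k = 0; k < xIndex.size(); ++k) {
        y[xIndex[k]] += alpha * xValue[k];
    }
}
```

In case (ii) the result's sparsity pattern is the union of the two index sets, so the loop becomes a merge of two sorted index lists and may have to allocate new fill-in, which is exactly why the two cases need different implementations.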

Thanks! And I also appreciate the updated documentation.