AtheMathmo / rulinalg

A linear algebra library written in Rust

Home Page: https://crates.io/crates/rulinalg

Moore–Penrose pseudo-inverse of Matrix

regexident opened this issue

I was trying to port some code to Rust that requires computing the pseudo-inverse of a matrix, and realized that rulinalg currently does not provide such functionality.

According to Wikipedia, the pseudo-inverse of a matrix can be computed via singular value decomposition (SVD), which is already available in rulinalg.
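For reference, the construction in question: if A = U Σ Vᵀ is the SVD of a real matrix A, then

A⁺ = V Σ⁺ Uᵀ

where Σ⁺ is obtained by transposing Σ and replacing each singular value above some small tolerance with its reciprocal (and zeroing the rest).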

I found a CoffeeScript implementation (for complex matrices) on GitHub:

cpinv = (A) ->
    {U, S, V} = csvd A
    tol = max(size(A))*nm.epsilon*S[0]
    Z = ((if x>tol then 1/x else 0) for x in S)
    V.dot(nm.diag(Z)).dot(U.transjugate())

Attempting to port the code to rulinalg (stripping it down to work with plain old real values), I got stuck at the use of dot in the last line:

impl<T: Any + Float + Signed + FromPrimitive> Matrix<T> {
    pub fn pseudo_inverse(self) -> Result<Matrix<T>, Error> {
        // Capture the larger dimension before `svd` consumes `self`.
        let dim = ::std::cmp::max(self.cols(), self.rows());
        self.svd().map(|(s, u, v)| {
            // Tolerance: max(m, n) * machine epsilon * largest singular value.
            let max = T::from_usize(dim).unwrap();
            let s_zero = s.get_unchecked([0, 0]).clone();
            let epsilon = max * T::epsilon() * s_zero;
            // z: reciprocals of the singular values above the tolerance, zero otherwise.
            let z = s.diag().apply(&|x| if x > epsilon { T::one() / x } else { T::zero() });
            v.dot(z).dot(u.transpose())
        })
    }
}

Any idea how the use of dot would translate to rulinalg (or if I'm on the right track to begin with)?

I haven't done any background reading into the topic here, but I imagine that the dot function translates to a matrix multiplication. For this you can just use the * operator:

v * z * u.transpose()
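If z is still a Vector at that point, one option - just an untested sketch, assuming Matrix::zeros and [[i, j]] indexing are available and that Σ is square - would be to lift it into an explicit diagonal matrix first:

// Untested sketch: lift the vector of reciprocal singular values `z`
// into an explicit diagonal matrix so that plain `*` applies throughout.
let n = z.size();
let mut z_mat = Matrix::zeros(n, n);
for i in 0..n {
    z_mat[[i, i]] = z[i];
}
let pinv = v * z_mat * u.transpose();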

I'll try to read over the topic and code properly at some point if that doesn't help!

Thanks for the prompt response, AtheMathmo!

Yeah, that's what I came up with, too. What tripped me up, though, was the z vector:

(v: Matrix) * (z: Vector) * (u.transpose(): Matrix)

There is no vector * matrix that I know of.


This is how numpy does it, btw:

def pinv(a, rcond=1e-15):
    a, wrap = _makearray(a)
    _assertNoEmpty2d(a)
    a = a.conjugate()
    u, s, vt = svd(a, 0)
    m = u.shape[0]
    n = vt.shape[1]
    cutoff = rcond*maximum.reduce(s)
    for i in range(min(n, m)):
        if s[i] > cutoff:
            s[i] = 1./s[i]
        else:
            s[i] = 0.
    res = dot(transpose(vt), multiply(s[:, newaxis], transpose(u)))
    return wrap(res)

But then again I'm not familiar with Python (let alone numpy), and that s[:, newaxis] alone was impossible to google, never mind the rest of the code.
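From what I can piece together, s[:, newaxis] just reshapes the 1-D array s into a column so that multiply broadcasts it across transpose(u), scaling row i of Uᵀ by s[i] - in other words diag(s)·Uᵀ without ever materializing a diagonal matrix. An untested sketch of the same row scaling in rulinalg terms, reusing the z vector from my snippet above:

// Untested sketch: scale row i of Uᵀ by z[i] directly,
// mirroring numpy's multiply(s[:, newaxis], transpose(u)).
let mut ut = u.transpose();
for i in 0..z.size() {
    for j in 0..ut.cols() {
        ut[[i, j]] = ut[[i, j]] * z[i];
    }
}
let pinv = v * ut;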

And this is scipy:

def pinv2(a, cond=None, rcond=None, return_rank=False, check_finite=True):
    a = _asarray_validated(a, check_finite=check_finite)
    u, s, vh = decomp_svd.svd(a, full_matrices=False, check_finite=False)

    if rcond is not None:
        cond = rcond
    if cond in [None, -1]:
        t = u.dtype.char.lower()
        factor = {'f': 1E3, 'd': 1E6}
        cond = factor[t] * np.finfo(t).eps

    rank = np.sum(s > cond * np.max(s))

    u = u[:, :rank]
    u /= s[:rank]
    B = np.transpose(np.conjugate(np.dot(u, vh[:rank])))

    if return_rank:
        return B, rank
    else:
        return B

It looks to me like the latter examples are computing zᵀU, so they treat it like a matrix multiplication with z as a row. I could be misunderstanding, though...
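For what it's worth, all three versions seem to boil down to the same thing: A⁺ = V · diag(z) · Uᵀ, with z the thresholded reciprocals of the singular values. Putting the pieces together for f64 - again an untested sketch, assuming svd() hands back (Σ, U, V) with Σ a diagonal matrix, as in my snippet above:

// Untested sketch of a Moore-Penrose pseudo-inverse via rulinalg's SVD.
fn pseudo_inverse(a: Matrix<f64>) -> Result<Matrix<f64>, Error> {
    let (rows, cols) = (a.rows(), a.cols());
    let (sigma, u, v) = a.svd()?;
    let k = ::std::cmp::min(sigma.rows(), sigma.cols());
    // Largest singular value, without assuming any ordering on the diagonal.
    let mut s_max = 0.0f64;
    for i in 0..k {
        s_max = s_max.max(sigma[[i, i]]);
    }
    // numpy-style cutoff: max(m, n) * machine epsilon * s_max.
    let tol = (::std::cmp::max(rows, cols) as f64) * ::std::f64::EPSILON * s_max;
    // diag(z): reciprocal of each singular value above the cutoff, zero otherwise.
    let mut z = Matrix::zeros(k, k);
    for i in 0..k {
        if sigma[[i, i]] > tol {
            z[[i, i]] = 1.0 / sigma[[i, i]];
        }
    }
    Ok(v * z * u.transpose())
}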

PR #122 has been closed pending work on SVD; it should be revisited once the SVD issues have been resolved.

Hi. Are there any updates on this? I also find myself in need of the Moore-Penrose pseudo-inverse (for Tsai camera calibration). While I could repeat this approach and implement it myself based on the other versions around (besides numpy, the C++ library Armadillo also has an implementation, for example), I'm very much not an expert on such matters and would prefer to leave the details in the capable hands of those with a much better idea of what they're doing.

Hi @Jarak-Jakar! Unfortunately, both @AtheMathmo and I have been very busy lately, so not much has been happening in rulinalg. From my side, I hope to be able to spend some time on it again soon (I guess I've been saying that for a while already...), but I can't make any promises.

The crux of the matter is that our SVD implementation needs some work. I've been working my way through the decompositions; the last one I had time for was the upper Hessenberg decomposition in #179. My plan - once I hopefully find the time - is to continue with the real Schur decomposition for computing real-valued eigenvalues of general square matrices, then the symmetric eigenvalue decomposition, and finally SVD. So it might still take some time.

Hi @Andlon. Thanks for your very quick reply :) I certainly understand not having time to work on everything!

I think I will try implementing a version of the Moore-Penrose pseudo-inverse using rulinalg (admittedly, I will probably end up inadvertently replicating the one already proposed) and examine the results, perhaps comparing them to those produced by numpy's and/or Armadillo's versions. If I get unexpected results, I'll post back here for your future reference when you do get the time to work on SVD.

I would offer to help with SVD, but I'm afraid that's outside my area of knowledge when it comes to mathematics.

Sounds good, @Jarak-Jakar! Keep us posted :-)

Hi @Andlon and @AtheMathmo. Believe it or not, I haven't forgotten about this; I've just been too busy to spare the time to work on it (I'm doing postgraduate study these days - I haven't even had a chance to work on what I planned to use the Moore-Penrose pseudo-inverse for since my last post). Unfortunately, I don't see that changing in the foreseeable future. So I hate to go back on my promise, but I'm afraid I'm not really expecting to complete the comparison any time soon. Sorry to over-promise and under-deliver. Please do keep up the good work, team! :)