[ANN] KrylovKit.jl

I am pleased to announce KrylovKit.jl, a registered Julia package collecting a number of Krylov-based algorithms for linear problems, singular value and eigenvalue problems, and the application of functions of linear maps or operators to vectors.

KrylovKit.jl accepts general functions or callable objects as linear maps, and general Julia objects with vector-like behavior as vectors.

From the documentation overview:

There are already a fair number of packages with Krylov-based or other iterative methods.

KrylovKit.jl distinguishes itself from these packages in the following ways:

  1. KrylovKit accepts general functions to represent the linear map or operator that defines
    the problem, without having to wrap them in a LinearMap
    or LinearOperator type.
Of course, subtypes of AbstractMatrix are also supported. If the linear map (always the first
    argument) is a subtype of AbstractMatrix, matrix-vector multiplication is used; otherwise
    it is applied as a function call.

  2. KrylovKit does not assume that the vectors involved in the problem are actual subtypes of
    AbstractVector. Any Julia object that behaves as a vector is supported, so in particular
    higher-dimensional arrays or any custom user type that supports the following functions
    (with v and w two instances of this type and α a scalar (Number)):

    • Base.eltype(v): the scalar type (i.e. <:Number) of the data in v
    • Base.similar(v, [T::Type<:Number]): a way to construct additional similar vectors,
      possibly with a different scalar type T.
    • Base.copyto!(w, v): copy the contents of v to a preallocated vector w
    • Base.fill!(w, α): fill all the scalar entries of w with the value α; this is only
      used with α = 0 to create a zero vector. Note that Base.zero(v) does not work
      for this purpose if we want to change the scalar eltype. Nor can we use
      rmul!(v, 0) (see below), since NaN*0 yields NaN.
    • LinearAlgebra.mul!(w, v, α): out-of-place scalar multiplication; multiply
      vector v with scalar α and store the result in w
    • LinearAlgebra.rmul!(v, α): in-place scalar multiplication of v with α.
    • LinearAlgebra.axpy!(α, v, w): store in w the result of α*v + w
    • LinearAlgebra.axpby!(α, v, β, w): store in w the result of α*v + β*w
    • LinearAlgebra.dot(v, w): compute the inner product of two vectors
    • LinearAlgebra.norm(v): compute the 2-norm of a vector
  3. To the best of my knowledge, KrylovKit.jl is the only package that provides a native Julia
    implementation of a Krylov method for eigenvalues of general matrices (in particular the
    Krylov-Schur algorithm). As such, it is the only pure Julia alternative to Arpack.jl.
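
Points 1 and 2 can be illustrated together; the following is a minimal sketch based on the description above (see the documentation for the exact signatures and keyword options):

```julia
using KrylovKit

# Sketch: the linear map is a plain function, and the "vector" is a 2x2
# matrix rather than an AbstractVector. KrylovKit only needs the interface
# functions listed in point 2 (similar, mul!, axpy!, dot, norm, ...),
# which ordinary arrays already support.
H = [1.0 0.0 0.0 0.0;
     0.0 2.0 0.0 0.0;
     0.0 0.0 3.0 0.0;
     0.0 0.0 0.0 4.0]
f(X) = reshape(H * vec(X), 2, 2)   # a linear map acting on 2x2 matrices
X₀ = randn(2, 2)                   # starting "vector"
vals, vecs, info = eigsolve(f, X₀, 1, :LM)
```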


There is also https://github.com/haampie/ArnoldiMethod.jl but it doesn’t use the Krylov-Schur algorithm.

Great, thanks for the link. I see there is also a JacobiDavidson implementation on https://github.com/haampie/; I’ll add those links to the readme (and hope that forces can be combined in the future).


Is it possible to make it work with types that are not mutable containers, and therefore cannot support mul! etc.?

I have not tested this, but in principle I always use the return value, i.e. I tried to never write
rmul!(v, α) on its own but rather always v = rmul!(v, α). This means that, if rmul!(v, α) is defined to just return v*α without actually working in place, everything might still work.
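
A hypothetical sketch of what that convention would allow (the type and its methods below are purely illustrative, not part of KrylovKit):

```julia
using LinearAlgebra

# Hypothetical sketch: an immutable "vector" type that implements the
# mutating-named API by returning new instances instead of mutating.
# Because the caller always uses the return value (v = rmul!(v, α)),
# this can still work even though nothing happens in place.
struct ImmutableVec
    data::NTuple{2,Float64}
end

LinearAlgebra.rmul!(v::ImmutableVec, α::Number) =
    ImmutableVec(α .* v.data)      # returns a new object, does not mutate

v = ImmutableVec((1.0, 2.0))
v = rmul!(v, 3.0)                  # rebind v to the scaled result
```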

I will try to add some tests for that.


Nice @juthohaegeman :)! I’ll take a look at the source.

In https://github.com/haampie/ArnoldiMethod.jl the goal is to have a version that does not depend on LAPACK (only BLAS right now), I’ll let you know when that is done.

I only realized a few days ago that Krylov-Schur is just another way to implement implicitly restarted Arnoldi, so I’m embracing that and moving away from exact shift ideas of ARPACK (also Lehoucq told me he considered that best!). This means I’m implementing the LAPACK functions to reorder the Schur form in real and complex arithmetic. Maybe in the end this should go to GenericLinearAlgebra.jl so that you could use it as well :).

I also have a Julia implementation of the Schur factorization of a Hessenberg matrix lying around (and related routines for reordering etc.), but I did not use it in KrylovKit.jl because for some matrices there were still convergence issues. I'd be happy to share what I have.

MATLAB actually also shifted away from ARPACK to Krylov-Schur, but I only learned that after I implemented it (not that it would have mattered). Stewart's book (Matrix Algorithms, Volume II: Eigensystems) is a great reference.

Awesome! :slight_smile: I think the QR algorithm I have is pretty much optimized, and I'm still planning to make a PR to GenericLinearAlgebra.jl, but only after the Arnoldi stuff is done. I know there is some literature on obscure cases where the QR algorithm does not converge, but I'm not interested in every edge case at the moment. And w.r.t. reordering: I'm currently doing some tricks with StaticArrays.jl to solve the tiny Sylvester equations (with LU + complete pivoting) that arise when swapping 2x2 blocks with 1x1 or 2x2 blocks in the quasi-upper-triangular matrix – did you implement that already?
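
For reference, the tiny Sylvester systems mentioned here can be written out via vectorization; this is a generic sketch of the underlying linear algebra (not the StaticArrays/complete-pivoting implementation discussed):

```julia
using LinearAlgebra

# Generic sketch: when swapping two adjacent diagonal blocks of a
# (quasi-)triangular matrix, one solves a tiny Sylvester equation
#     A*X - X*B = C
# with A and B the two diagonal blocks (each 1x1 or 2x2). Vectorizing
# gives (I ⊗ A - Bᵀ ⊗ I) vec(X) = vec(C), a system of size at most 4x4.
function tiny_sylvester(A::AbstractMatrix, B::AbstractMatrix, C::AbstractMatrix)
    n, m = size(C)
    K = kron(Matrix(I, m, m), A) - kron(transpose(B), Matrix(I, n, n))
    return reshape(K \ vec(C), n, m)
end
```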

Yes, but I just used tuples, because I did not want dependencies, and also I think I started on this before StaticArrays.jl was out and popular. Here is the code, but it dates back to Julia 0.6 (or even older):
