I am pleased to announce KrylovKit.jl, a registered Julia package collecting a number of Krylov-based algorithms for linear problems, singular value and eigenvalue problems, and the application of functions of linear maps or operators to vectors.
KrylovKit.jl accepts general functions or callable objects as linear maps, and general Julia objects with vector-like behavior as vectors.
From the documentation overview:
There are already a fair number of packages with Krylov-based or other iterative methods, such as:

- IterativeSolvers.jl: part of the JuliaMath organisation; solves linear systems and least squares problems, eigenvalue and singular value problems
- Krylov.jl: part of the JuliaSmoothOptimizers organisation; solves linear systems and least squares problems, specifically for linear operators from LinearOperators.jl
- KrylovMethods.jl: specifically for sparse matrices
- Expokit.jl: application of the matrix exponential to a vector
KrylovKit.jl distinguishes itself from the previous packages in the following ways:

- KrylovKit accepts general functions to represent the linear map or operator that defines the problem, without having to wrap them in a `LinearMap` or `LinearOperator` type. Of course, subtypes of `AbstractMatrix` are also supported. If the linear map (always the first argument) is a subtype of `AbstractMatrix`, matrix-vector multiplication is used; otherwise it is applied as a function call (see the first code sketch below this list).
- KrylovKit does not assume that the vectors involved in the problem are actual subtypes of `AbstractVector`. Any Julia object that behaves as a vector is supported, so in particular higher-dimensional arrays or any custom user type that supports the following functions (with `v` and `w` two instances of this type and `α` a scalar (`Number`)); a minimal sketch of such a type is given below this list:
  - `Base.eltype(v)`: the scalar type (i.e. `<:Number`) of the data in `v`
  - `Base.similar(v, [T::Type<:Number])`: a way to construct additional similar vectors, possibly with a different scalar type `T`
  - `Base.copyto!(w, v)`: copy the contents of `v` to a preallocated vector `w`
  - `Base.fill!(w, α)`: fill all the scalar entries of `w` with value `α`; this is only used in combination with `α = 0` to create a zero vector. Note that `Base.zero(v)` does not work for this purpose if we want to change the scalar `eltype`. We can also not use `rmul!(v, 0)` (see below), since `NaN*0` yields `NaN`.
  - `LinearAlgebra.mul!(w, v, α)`: out-of-place scalar multiplication; multiply vector `v` with scalar `α` and store the result in `w`
  - `LinearAlgebra.rmul!(v, α)`: in-place scalar multiplication of `v` with `α`
  - `LinearAlgebra.axpy!(α, v, w)`: store in `w` the result of `α*v + w`
  - `LinearAlgebra.axpby!(α, v, β, w)`: store in `w` the result of `α*v + β*w`
  - `LinearAlgebra.dot(v, w)`: compute the inner product of two vectors
  - `LinearAlgebra.norm(v)`: compute the 2-norm of a vector
- To the best of my knowledge, KrylovKit.jl is the only package that provides a native Julia implementation of a Krylov method for eigenvalues of general matrices (in particular the Krylov-Schur algorithm). As such, it is the only pure Julia alternative to Arpack.jl.
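To illustrate the first point, here is a minimal sketch, assuming the `eigsolve(A, x₀, howmany, which)` calling convention returning `(vals, vecs, info)` (check the documentation for the exact interface), of solving the same eigenvalue problem once with an `AbstractMatrix` and once with a plain function:

```julia
using KrylovKit, LinearAlgebra

A = randn(100, 100)
A = (A + A') / 2            # a Hermitian test matrix
x₀ = randn(100)             # starting guess for the Krylov subspace

# AbstractMatrix as first argument: matrix-vector multiplication is used
vals, vecs, info = eigsolve(A, x₀, 3, :LR)

# A plain function as first argument: it is simply applied as f(v)
f(v) = A * v
fvals, fvecs, finfo = eigsolve(f, x₀, 3, :LR)

vals[1:3] ≈ fvals[1:3]      # same three eigenvalues with largest real part
```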
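Similarly, for the second point, a minimal hypothetical sketch of a custom "vector" type that simply wraps a matrix of coefficients and implements the interface listed above by forwarding to the wrapped array (the type name `MatVec` and the Sylvester-like operator are purely illustrative; if the listed interface is indeed sufficient, this should work directly with e.g. `eigsolve`):

```julia
using KrylovKit, LinearAlgebra

# Illustrative wrapper: a "vector" whose data happens to be a matrix
struct MatVec{T<:Number}
    data::Matrix{T}
end

Base.eltype(v::MatVec) = eltype(v.data)
Base.similar(v::MatVec) = MatVec(similar(v.data))
Base.similar(v::MatVec, ::Type{T}) where {T<:Number} = MatVec(similar(v.data, T))
Base.copyto!(w::MatVec, v::MatVec) = (copyto!(w.data, v.data); w)
Base.fill!(w::MatVec, α) = (fill!(w.data, α); w)

LinearAlgebra.mul!(w::MatVec, v::MatVec, α::Number) = (w.data .= v.data .* α; w)
LinearAlgebra.rmul!(v::MatVec, α::Number) = (rmul!(v.data, α); v)
LinearAlgebra.axpy!(α::Number, v::MatVec, w::MatVec) = (axpy!(α, v.data, w.data); w)
LinearAlgebra.axpby!(α::Number, v::MatVec, β::Number, w::MatVec) = (axpby!(α, v.data, β, w.data); w)
LinearAlgebra.dot(v::MatVec, w::MatVec) = dot(v.data, w.data)
LinearAlgebra.norm(v::MatVec) = norm(v.data)

# A linear map acting directly on MatVec, given as a plain function
B = randn(10, 10)
L(v::MatVec) = MatVec(B * v.data + v.data * B')   # a Sylvester-like operator

vals, vecs, info = eigsolve(L, MatVec(randn(10, 10)), 2, :LM)
```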
 I think the QR-algorithm I have is pretty much optimized and I’m still planning to make a PR to GenericLinearAlgebra.jl, but after the Arnoldi stuff is done. I know there is some literature on obscure cases where the QR-algorithm does not converge, but I’m not interested in every edge case at the moment. And w.r.t. reordering, I’m currently doing some tricks with StaticArrays.jl to solve tiny Sylvester equations with LU + complete pivoting that arise when swapping 2x2 blocks with 1x1 or 2x2 in the quasi upper triangular matrix – did you implement that already?