Teaching Scientific Computing with Julia

Notice that the function `eigs` has been moved to the package Arpack.jl. The problems I have faced with `eigs` occur when using shift-and-invert: sometimes the factorization step used to solve the inner linear problems fails with an error, and there is no way to change which factorization is used.

ArnoldiMethod.jl and KrylovKit.jl do not implement shift-and-invert, but it is not hard to adapt them to do so; see for example the "Spectral transformations" section in the docs of ArnoldiMethod.jl. However, it would be nice to have something ready out-of-the-box.
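For reference, the approach in those docs can be sketched roughly as follows for a standard eigenproblem. This is a sketch from memory, not a tested recipe; the exact `partialschur` keyword names and the LinearMaps.jl wrapper may differ between versions, so check the current docs:

```julia
using ArnoldiMethod, LinearAlgebra, LinearMaps

# Eigenvalues of A closest to the shift σ become the *largest*
# eigenvalues of (A - σI)^{-1}, which Arnoldi converges to quickly.
A = Symmetric(rand(100, 100))
σ = 0.5

# Factorize once, up front. Here *you* choose the factorization,
# which is exactly what eigs does not let you do.
F = factorize(A - σ * I)

# Wrap x ↦ (A - σI)^{-1} x as a linear map for the Arnoldi iteration.
op = LinearMap{eltype(A)}((y, x) -> ldiv!(y, F, x), size(A, 1);
                          ismutating = true)

decomp, history = partialschur(op; nev = 4, which = :LM)
μ, X = partialeigen(decomp)
λ = σ .+ inv.(μ)   # undo the spectral transformation
```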

Between ArnoldiMethod.jl and KrylovKit.jl, from experience with an MSc student (computing band structures from a tight-binding Hamiltonian), we found that KrylovKit.jl was more accurate: with shift-and-invert in ArnoldiMethod.jl we would get "noisy" bands for the eigenvalues further away from the target. The downside of KrylovKit.jl is that it does not support in-place operations.
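With KrylovKit.jl the same trick is just a matrix-free call, at the cost of allocating on every application since there is no in-place interface. A rough sketch (the `eigsolve` signature is from memory, so double-check it):

```julia
using KrylovKit, LinearAlgebra

A = Symmetric(rand(50, 50))
σ = 0.5
F = lu(Matrix(A - σ * I))   # again, the factorization is your choice

# eigsolve accepts any function x -> (A - σI)^{-1} x; note that F \ x
# allocates a fresh vector each call, as KrylovKit.jl works out-of-place.
x0 = rand(size(A, 1))
μ, vecs, info = eigsolve(x -> F \ x, x0, 4, :LM)
λ = σ .+ inv.(μ)   # map back to eigenvalues of A near σ
```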

As I said in another post, I think that for a high-level operation such as plotting, being able to switch backends is not such a great advantage. If you really want to fine-tune the look of a plot (and you always end up wanting to), you run into the issue that not all backends have the same features, and you end up using backend-dependent options. At that point, it is better to just use the backend directly.

On the other hand, for other kinds of problems, such as eigenproblems or linear problems, I think that being able to easily change backends/solvers is actually useful. You might be writing code to solve a general problem where the eigen/linear problem is just one of the steps, and there might not be a single ideal solver: the best choice may depend on the size of the problem you are dealing with, or you might want to swap solvers easily for testing. So being able to pass a solver as a function argument really makes things easy. In this respect, I think that FLoops.jl does this really well.
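A minimal sketch of that pattern (the names here are hypothetical, not from any package): the algorithm takes the solver as an argument, so callers can swap the backend without touching the rest of the code.

```julia
# `linsolve` is any callable (A, b) -> x. The default is backslash,
# but callers can pass in an iterative or factorization-based solver.
function implicit_step(A, b; linsolve = (A, b) -> A \ b)
    x = linsolve(A, b)
    # ... the rest of the algorithm uses x ...
    return x
end

# Swapping the backend requires no change to implicit_step, e.g.:
# implicit_step(A, b; linsolve = (A, b) -> IterativeSolvers.cg(A, b))
```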

I am not fully convinced that transforming

```julia
x = A\b
```

into

```julia
prob = LinearProblem(A, b)
sol = solve(prob)
x = sol.u
```

is a great idea for beginners, but I guess one can write it as:

```julia
x = solve(LinearProblem(A, b)).u
```

which is a one-liner, but still a bit verbose.
For me, a notation like:

```julia
x = linearsolve(A, b, method)
```

seems more natural (before Julia I mostly used Mathematica). But I guess that when the method needs to keep a cache, the best option may really be something like problem-init-solve.
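That caching use case is where problem-init-solve pays off. As far as I understand the LinearSolve.jl interface (check the docs, my recollection may be off), it looks roughly like:

```julia
using LinearSolve, LinearAlgebra

A = rand(4, 4) + 4I
b1 = rand(4)
b2 = rand(4)

prob = LinearProblem(A, b1)
cache = init(prob, LUFactorization())  # the factorization happens here, once

sol1 = solve!(cache)   # solves A x = b1
cache.b = b2           # reuse the cached factorization
sol2 = solve!(cache)   # solves A x = b2 without refactorizing
```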