While I am clearly partial to `solve`-style syntax (just look at JuliaDiffEq), it doesn't hit the necessary points already mentioned in this thread. To summarize, a good API for this would need to:
- Be able to easily swap between different packages (and Base).
- Be able to “initialize”. This is important for dense solvers, where saving the factorization can lead to large speedups, and for things like GPU code, where doing the data transfer once is essential.
- Have an easy in-place syntax.
- Be able to pass parameters.
- Not add dependencies.
The problem with `solve`'s syntax is that it does not handle point 2 well.
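As a minimal sketch of where point 2 bites (with `solve` standing in for a hypothetical one-shot API, and `lufact` as the two-step counterpart from Base):

```julia
# One-shot style: every call has to redo the expensive setup, because there is
# no object in which to cache the factorization (`solve` here is hypothetical).
x1 = solve(A, b1, method = :lu)
x2 = solve(A, b2, method = :lu)   # factors A all over again

# Two-step style: initialize once, reuse for every right-hand side.
K  = lufact(A)   # factorization object
x1 = K \ b1
x2 = K \ b2
```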
Example
To better situate the problem and show the advantages/disadvantages of various setups, let's have an example problem. This example stems from Rosenbrock ODE solvers, but it's a pretty common pattern throughout applied math. I would like to be able to write a function that solves a linear system three times, changing the right-hand side each time, and allowing the user to choose the method. Using the “God type” method of @stevengj / PETSc.jl (“God type” naming from @jiahao's issue; I am not sure of a good name for it because it's not always a factorization, but I will use the word factorization), I can do this as follows:
```julia
function OneStep(f,...,linear_solver=lufact)
    ...
    A = Jacobian(f(...))  # A comes from some Jacobian
    K = linear_solver(A)  # Build the factorization object
    ...                   # Some calculations
    @into! x1 = K\b1      # First linear solve
    ...                   # use x1 a bit
    @into! x2 = K\b2      # Second linear solve
    ...                   # use x2 a bit
    @into! x3 = K\b3      # Third linear solve
    ...                   # Do some stuff to end
end
```
First off, let me mention that `@into!` is from InplaceOps.jl and just changes the linear solves to be in-place operations via `A_ldiv_B!`, so it's just clean syntax. But let's actually see what that does. If one uses a dense factorization like `lufact` or `qrfact`, then this algorithm computes the factorization once and then reuses it in all 3 solves. This is then an efficient code for dense solvers. But in many cases, a Jacobian may be sparse. If someone passes in
```julia
OneStep(f,...,(A) -> KSP(A, ksp_type="gmres", ksp_rtol=1e-6))
```
then all of the linear solves are performed by PETSc.jl's GMRES, with no modification to the code necessary.
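To make the dispatch mechanics concrete, here is a minimal sketch of the contract that `OneStep` relies on. The type `MySolver` and the helper `my_iterative_solve!` are hypothetical stand-ins, not the API of any package mentioned here; the point is only that a backend plugs in by providing a constructor plus `\` and `A_ldiv_B!` methods (the pre-0.7 names used in this post).

```julia
import Base: \, A_ldiv_B!

# Hypothetical wrapper illustrating the "factorization object" contract.
# `MySolver` and `my_iterative_solve!` are made-up names, not a real package's API.
struct MySolver{T}
    A::T            # operator captured (and possibly pre-processed) once
    rtol::Float64   # solver options captured at construction time
end
MySolver(A; rtol = 1e-8) = MySolver(A, rtol)

# Out-of-place solve: `x = K \ b`
\(K::MySolver, b) = my_iterative_solve!(similar(b), K.A, b, K.rtol)

# In-place solve: this is what `@into! x = K \ b` turns into
A_ldiv_B!(x, K::MySolver, b) = my_iterative_solve!(x, K.A, b, K.rtol)
```

With those two methods defined, `(A) -> MySolver(A, rtol = 1e-6)` can be handed to `OneStep` as the `linear_solver` argument, and `OneStep` never needs to know which backend it got.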
Now let's see how other packages could really build off of this. Take @kslimes's CUSOLVER.jl. If this package provided an interface with such types and dispatches, then the following could occur. The type generation could be some function like
```julia
linear_solver = (A) -> CudaSolve(A, csrlsvlu(), abstol=1e-6)
```
which could send the (hopefully) sparse matrix to the GPU and save it as a sparse matrix in a CUDA format. Then `\` (and `A_ldiv_B!`) can be overloaded to:

- send the vector to the GPU (if the vector is not already a GPU vector; this “if” is simply dispatch),
- compute `\` using the chosen algorithm, and
- if the input started on the CPU, send the resulting vector back to the CPU.
Now, when a user does

```julia
OneStep(f,...,(A) -> CudaSolve(A, csrlsvlu(), abstol=1e-6))
```

then, without any modifications to `OneStep`, the code will do the linear solves on the GPU and only send the Jacobian to the GPU a single time.
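For concreteness, here is a rough sketch of how that three-bullet overload could be arranged. None of these names (`CudaSolve`, `send_to_gpu`, `gpu_solve`, `collect_from_gpu`, `AbstractGPUVector`) are CUSOLVER.jl's actual API; they are placeholders showing where dispatch handles the data movement.

```julia
import Base: \, A_ldiv_B!

# Placeholder for whatever GPU vector type the real package would provide.
abstract type AbstractGPUVector{T} <: AbstractVector{T} end

# Hypothetical GPU-backed solver object. The matrix transfer happens once,
# at construction time.
struct CudaSolve{TA, Alg}
    gpuA::TA          # sparse matrix already living on the GPU
    alg::Alg          # e.g. the csrlsvlu() choice
    abstol::Float64
end
CudaSolve(A, alg; abstol = 1e-6) = CudaSolve(send_to_gpu(A), alg, abstol)

# CPU vector comes in: move it over, solve, move the result back.
function \(K::CudaSolve, b::Vector)
    gpub = send_to_gpu(b)
    gpux = gpu_solve(K.gpuA, gpub, K.alg, K.abstol)
    return collect_from_gpu(gpux)
end

# GPU vector comes in: no transfers needed; the "if" above is just this dispatch.
\(K::CudaSolve, b::AbstractGPUVector) = gpu_solve(K.gpuA, b, K.alg, K.abstol)

# In-place form used by `@into!`
A_ldiv_B!(x, K::CudaSolve, b) = copy!(x, K \ b)
```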
Then if Pardiso.jl had an object setup which just calls its solve, or MUMPS.jl, or …, then all of these codes can easily be exchanged. Of course, you'd have to check the different packages for how to specify the tolerances each time (since they could name them literally anything), but in the end you can use anything. The package which has the `OneStep` function, in this setup, does not have to depend on any of the linear solvers, and so the user can give it any function which produces a type that can `\`.
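And even if a package only exposes a one-shot solve function, a thin adapter can still fit it into this interface. A hypothetical sketch (both `OneShotWrapper` and `some_package_solve` are made-up names):

```julia
import Base: \, A_ldiv_B!

# Hypothetical adapter: wrap a package that only offers a one-shot solve(A, b)
# into the two-step interface. It cannot cache a factorization, but it slots
# into OneStep unchanged.
struct OneShotWrapper{TA, F}
    A::TA
    solvefunc::F   # e.g. (A, b) -> some_package_solve(A, b, rtol = 1e-6)
end

\(K::OneShotWrapper, b) = K.solvefunc(K.A, b)
A_ldiv_B!(x, K::OneShotWrapper, b) = copy!(x, K.solvefunc(K.A, b))
```

Passing `(A) -> OneShotWrapper(A, (A, b) -> some_package_solve(A, b))` then works with `OneStep` as-is, though it gives up the factorization reuse that a proper two-step backend provides.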
The problem with `solve` is that I don't see how it can match this two-step approach in an easy manner and get the same efficiency. It can get lots of these things right, but the initialization is really crucial for many linear solving setups (not for iterative solvers, but thinking only about CPU-based iterative solvers is really close-minded here). And I don't see why you'd want to avoid `\`: MATLAB had some bad syntax choices, but `\` is one choice that I find to be very clear and elegant.