In Base, this routine is called A_ldiv_B!
You’re absolutely right that initialization is necessary. In Trilinos, the “A” is also something called a “linear operator with solve”, which is typically an initialized factorization, so it would present the same workflow, except that there is an extra parameters option for passing parameters you might want to vary at each step and that don’t require recomputing the initialized operator, such as the tolerance or the number of iterations.
That said, I like your example and it would fit my future use case (solving the Navier-Stokes equations) well, but I think it would be nice to also have a more advanced syntax that allows passing in extra parameters at each step if needed.
How standard is the @into! syntax? It is much nicer than having to remember a non-intuitive ordering of function arguments. I think it would be good to have macros like this in Base so they are more universally recognised.
We’d have to try to standardize some arguments if we want that to work nicely. Can you give an example of what you’re thinking of?
@into! is just a nice way from InplaceOps.jl to use A_ldiv_B!. A_ldiv_B! is standardized; I prefer the \ look with a macro in front. Pretty sure this macro isn’t standard, but I would hope it would become so.
I was thinking of just adding an extra argument parameters with an abstract type (abstract SolveParameters?), so that using multiple dispatch it could be specialized to contain anything.
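Something like this minimal sketch (the concrete type names are just illustrative):

abstract SolveParameters

immutable MaxIterations <: SolveParameters
    n::Int
end

immutable Tolerance <: SolveParameters
    tol::Float64
end

# Solver packages can then specialize on the parameter types they understand:
# solve!(x, K, b, p::MaxIterations) = ...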
What would that give us over the method using “God types” and closures? That’s the example I’m looking for.
You could set, e.g., the tolerance at each step:
function OneStep(f,...,linear_solver=lufact)
  ...
  A = Jacobian(f(...))               # A comes from some Jacobian
  K = linear_solver(A)               # Build the factorization object
  ...                                # Some calculations
  @into! x1 = K\b1                   # First linear solve, default settings from the factorization object
  ...                                # use x1 a bit
  solve!(x2,K,b2,MaxIterations(10))  # The second linear solve just needs an approximation: stop after 10 iterations
  ...                                # use x2 a bit
  solve!(x3,K,b3,Transpose(true))    # The third linear solve uses the transpose of A
  ...                                # Do some stuff to end
end
The same could probably be achieved by modifying A, but this would be trickier if the solutions could happen in parallel. In general, I think it’s prudent to have a provision for passing parameters related to the application of A for a specific RHS but that are not directly a property of A itself.
What about just requiring that every factorization object implements an update! function? The only thing required to make this work well would be a PR to Base convincing them that a blank dispatch for update! would make this code much easier to write (and maybe a new spot on the “Interfaces” page of the documentation describing this interface).
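A blank dispatch could be as small as this sketch (the no-op fallback and the package method shown are just assumptions):

# In Base: a generic no-op fallback that packages override for their own types.
update!(K, args...; kwargs...) = K

# In a package (illustrative):
# update!(K::MyFactorization; tolerance = K.tol) = (K.tol = tolerance; K)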
(I realize I wrote A instead of K in my previous post; I will correct it after sending this.)
This does not address the case (a bit far-fetched, admittedly, and not something I require in my code) where the solves would happen in parallel: different parameterizations of K would require making copies. Also, conceptually, things like the number of iterations and the tolerance aren’t really a property of the factorization object but of a specific application of that object, so I feel it makes sense to be able to set these parameters when they matter (allowing, of course, for defaults to be set in K, so that the \ variant still works).
Regarding update!: how about a parameters function that returns parameters which can be manipulated using setindex!:
params = parameters(K)
setindex!(params, 1e-6, :tolerance)
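A minimal sketch of how this could be implemented, assuming a hypothetical wrapper type that keeps its defaults in a Dict:

type MyFactorization        # hypothetical wrapper
    F                       # the underlying factorization
    params::Dict{Symbol,Any}
end

# Returning the Dict directly makes setindex! (i.e. params[:tolerance] = 1e-6) just work:
parameters(K::MyFactorization) = K.params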
The name update! seems a bit ambiguous; it could also be interpreted as “I changed the matrix, update the factorization”, for example.
Also, is it necessary to have all of this interface in Base? Couldn’t a package describing the interface be created? It would then be easier to make changes if necessary.
If we want to use \ and A_ldiv_B! (which I still think is sensible), we would only need a minimal amount of Base compatibility, i.e. just ways for it to ignore the parameters instead of throwing an error when they’re there.
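That could be as small as a fallback that drops the extra arguments, for example (a sketch; requiring at least one parameter avoids clashing with the standard three-argument method):

# Forward parameter-carrying calls to the standard parameter-free method.
A_ldiv_B!(x, K, b, p, rest...) = A_ldiv_B!(x, K, b)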
Yes, the parameters should always be optional, with defaults configurable in K. I agree that \ and A_ldiv_B! should stay, of course, but the solve! interface would be a nice addition to add a little more flexibility. I must admit I never would have thought to look for a function named A_ldiv_B!; would it be a totally crazy idea to rename it to solve!?
I’m not a huge fan of the naming scheme, but it’s consistent with A_mul_B!, Ac_mul_B!, etc. The notation solve! on its own is not as versatile.
In parallel to the discussion on parameters, I think we should bring up linear maps. Not every linear system to be solved is backed by a matrix; in many cases one might want to solve with a linear operator or a linear map. Generalizing like this is computationally beneficial, because things like matrix-free methods can be much faster and/or take much less memory.
IterativeSolvers.jl seems to want to go in this direction by using LinearMaps.jl.
I think it would be nice if more packages tried to support at least something like linear maps. If I understand correctly, a WrappedMap is just a matrix wrapped as a linear map, so I assume the package handles the cases needed to make this work anywhere a matrix \ would work (does it?).
Then the push would really be about trying to get other libraries to support something like the FunctionMap, where you define a function for A*x. It’s clear that iterative solvers “can” work with this, but what packages can be made compatible with something like this? Do things like CUSOLVER and PETSc strictly require a matrix?
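As a concrete sketch of the FunctionMap style (keyword names as in current LinearMaps.jl releases; the operator here is a matrix-free 1-D Laplacian):

using LinearMaps, IterativeSolvers

# Define only the action x -> A*x; no matrix is ever stored.
function laplacian_mul(x)
    n = length(x)
    y = similar(x)
    for i in 1:n
        y[i] = 2x[i] - (i > 1 ? x[i-1] : zero(eltype(x))) - (i < n ? x[i+1] : zero(eltype(x)))
    end
    return y
end

n = 100
A = LinearMap(laplacian_mul, n; issymmetric = true, isposdef = true)
b = rand(n)
x = cg(A, b)   # conjugate gradients only needs A*x, so a FunctionMap suffices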
I’m confused why LinearMaps.jl doesn’t do the following:
abstract AbstractLinearMap{T} <: AbstractMatrix{T}
# Read entry (k,j) by multiplying with the j-th standard basis vector:
getindex(A::AbstractLinearMap,k::Integer,j::Integer) = (A*[zeros(j-1);1;zeros(size(A,2)-j)])[k]
An update on this: the factorization-type style has been implemented in DifferentialEquations.jl. I tested it out, and it works really nicely as a cross-package solution. While it’s currently only enabled on master, docs are already up that show how to specify it:
http://docs.juliadiffeq.org/latest/features/linear_nonlinear.html
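For example, specifying the factorization looks roughly like this (a sketch; see the linked docs for the exact API):

using OrdinaryDiffEq

# Pick the linear solver used inside an implicit method by wrapping a
# factorization routine (here LU) into the common interface.
alg = ImplicitEuler(linsolve = LinSolveFactorize(lufact))
sol = solve(prob, alg)   # prob is an ODEProblem defined elsewhere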
I’ll see if I get around to putting up some PRs to make this work with more packages. All that needs to happen is for the type interface to be created for packages like PETSc.jl. There’s a thread going in IterativeSolvers.jl for this:
https://github.com/JuliaMath/IterativeSolvers.jl/issues/99
I’ll also be poking around for packages which should use this interface, like NLsolve.jl.
https://github.com/EconForge/NLsolve.jl/issues/83
In total, it’s really painless to set up, yet it’s a really cool feature for Julia. Thanks everyone for the ideas!
@ChrisRackauckas Re. your earlier question, PETSc requires a matrix object. Matrix objects can have different formats, including various flavors of sparse storage formats and MatShell, which doesn’t require an explicit matrix but instead requires a callback function that performs matrix-vector products.