I need to solve a (smooth, unconstrained) system of nonlinear equations for an unknown defined on multiple subdomains (to wit, domain decomposition for a PDE). A natural representation of the unknown might be a vector of arrays of different sizes.

My first thought was that I would need to give a true Vector to the nonlinear solver and then view it as the subdomain-aware version for my stuff. Something like a PseudoBlockVector with a resize operation within each subdomain. Doable, but I never look forward to making multiple views into the same data without creating unwanted copies (or failing to create needed copies, or deep copies…).

Then I thought a cooler Julian solution would be to subtype AbstractVector for my representation and generalize `+`, `-`, `length`, and so on, so that the nonlinear solver would never have to know it wasn't working with a true Vector.

Does anyone know if the NLsolve or Optim packages, or another not on my radar, can handle that? And what the "and so on" set of necessary vector operations would be?
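To make the idea concrete, here is a rough sketch of what that subtype could look like: a ragged collection of subdomain arrays presented as a single flat AbstractVector. All names here (`RaggedVector`, etc.) are hypothetical, not from any package:

```julia
# Hypothetical sketch: per-subdomain arrays viewed as one flat AbstractVector.
struct RaggedVector{T} <: AbstractVector{T}
    blocks::Vector{Vector{T}}   # one array per subdomain
    offsets::Vector{Int}        # cumulative lengths, for flat-index mapping
end

RaggedVector(blocks::Vector{Vector{T}}) where {T} =
    RaggedVector{T}(blocks, cumsum([0; length.(blocks)]))

Base.size(v::RaggedVector) = (v.offsets[end],)

# Map a flat index i to (block, local index) via a binary search on offsets
function Base.getindex(v::RaggedVector, i::Int)
    b = searchsortedlast(v.offsets, i - 1)
    v.blocks[b][i - v.offsets[b]]
end

function Base.setindex!(v::RaggedVector, x, i::Int)
    b = searchsortedlast(v.offsets, i - 1)
    v.blocks[b][i - v.offsets[b]] = x
end

# similar() lets generic code allocate workspace with the same block structure
Base.similar(v::RaggedVector{T}) where {T} =
    RaggedVector([similar(b) for b in v.blocks])
```

With `size`, `getindex`, `setindex!`, and `similar` defined, the generic fallbacks in Base give you iteration, broadcasting, `norm`, and arithmetic for free, though a solver that allocates via broadcasting will get back plain Arrays unless you also implement the broadcasting interface.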

You don't need to make it an AbstractVector; only a subset of the methods is required for a Newton solve. For example, in my package PseudoArcLengthContinuation.jl, I rely on the same set of methods as KrylovKit.jl. It provides a Newton method and others for "general" vectors.

You can also provide your own linear solver if you wish.
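For a sense of what that subset of methods looks like, here is a hedged sketch for a wrapper around per-subdomain arrays that is not an AbstractVector at all. The operations follow the kind of interface KrylovKit.jl describes for custom vector types (check its documentation for the exact list); the type name `SubdomainState` is made up:

```julia
using LinearAlgebra

# Hypothetical wrapper: per-subdomain arrays, deliberately NOT <:AbstractVector.
struct SubdomainState
    parts::Vector{Vector{Float64}}
end

Base.eltype(::SubdomainState) = Float64
Base.similar(v::SubdomainState) = SubdomainState([similar(p) for p in v.parts])
Base.copyto!(w::SubdomainState, v::SubdomainState) =
    (foreach(copyto!, w.parts, v.parts); w)

# In-place scaling and linear combinations, applied blockwise
LinearAlgebra.rmul!(v::SubdomainState, α::Number) =
    (foreach(p -> rmul!(p, α), v.parts); v)
LinearAlgebra.axpy!(α::Number, v::SubdomainState, w::SubdomainState) =
    (foreach((p, q) -> axpy!(α, p, q), v.parts, w.parts); w)
LinearAlgebra.axpby!(α::Number, v::SubdomainState, β::Number, w::SubdomainState) =
    (foreach((p, q) -> axpby!(α, p, β, q), v.parts, w.parts); w)

# Inner product and norm of the "concatenated" vector
LinearAlgebra.dot(v::SubdomainState, w::SubdomainState) =
    sum(dot(p, q) for (p, q) in zip(v.parts, w.parts))
LinearAlgebra.norm(v::SubdomainState) = sqrt(sum(abs2 ∘ norm, v.parts))
```

Since each method just delegates blockwise, the subdomain arrays stay separate objects throughout the solve, and no flat copy is ever materialized.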

Again, that was my first thought, and it should be fine to start. Eventually, though, I might want different parts of the vector to live on different processes. Wouldn't that leave me stuck if the solver wants to, say, add vectors together, unless it is willing to accept an AbstractVector in the first place?

@rveltz, it looks like KrylovKit's description of vector-like behavior is the kind of thing I'm after. Your PALC might fit the bill, then. It looks like nice work! I must say that after reading your front-page disclaimer about performance, I was very surprised by the extent to which you seem to defy it in your examples.

It could be, but I would not worry too much about this. For example, see DArray{T,N,A} <: AbstractArray{T,N} in DistributedArrays.jl. See also the (WIP) MPIArray.jl, which gives an AbstractArray. Knowing a bit about where you're headed would help: for example, do you want a cluster of GPUs, a cluster of CPUs, multithreading…
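As a rough sketch of how the distributed case could look with DistributedArrays.jl (assuming its `dzeros`/`DArray` API; verify against the package's README), the point is that the distributed object is still an AbstractArray, so generic solver code keeps working:

```julia
using Distributed
addprocs(2)                         # two worker processes
@everywhere using DistributedArrays
using LinearAlgebra

# A DArray <: AbstractArray whose blocks live on the workers
d = dzeros((100,), workers())

d2 = map(x -> x + 1, d)             # elementwise ops stay distributed
println(norm(d2))                   # generic AbstractArray methods just work
```

Each worker can then do subdomain-local work on its own block (via `localpart`) while the solver only ever sees one AbstractArray.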

For example, using AbstractArray allows some performance improvement by calling BLAS (search in KrylovKit about this).

When I first wrote PseudoArcLengthContinuation.jl, I was using ArrayFire.jl, KrylovKit.jl and ApproxFun.jl; that's why the package does not require AbstractArray.
I don't use IterativeSolvers.jl much because it lacks an eigensolver; it is otherwise a wonderful package.

Well, I do research with this tool, studying some challenging PDEs… So I guess I toned down the disclaimer about performance too much. Most of the good performance is squeezed out of the linear solver, though…

That said, @ChrisRackauckas explains very well why this type of "simple" package performs well in this post. Also, if you are interested in PALC, you should check out the excellent Bifurcations.jl.

KrylovKit.jl is for linear problems, eigenvalue problems, and related iterative methods. For nonlinear optimization problems, I have also been working on OptimKit.jl, but it's not registered and still a work in progress (though the basic functionality works). It goes a bit further than just requiring you to overload dot, axpy!, and those kinds of functions for your custom vector-like type (which does not need to be <:AbstractVector or <:AbstractArray). You can just specify which methods it has to use for those, so your function for adding "vectors" can even have a different name, and you don't need to go through the trouble of defining a new custom type. If the parameters of your optimization problem are best captured as, say, a tuple of a scalar, a matrix, and some other thing, you can just use that, i.e. x = (scalar, matrix, otherobj), and then define functions for adding and scaling these, and for computing inner products.
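To sketch that tuple-of-objects idea (function names here are made up for illustration; consult OptimKit.jl's README for the actual keyword arguments that accept them):

```julia
# Hedged sketch: the optimization variable is a plain tuple, and the solver is
# handed free functions for the vector operations instead of a custom type.
x0 = (0.5, randn(3, 3))             # unknown = (scalar, matrix)

# Euclidean inner product over both components of a tangent "vector"
my_inner(x, v, w) = v[1] * w[1] + sum(v[2] .* w[2])

# x + α d, applied componentwise
my_add(x, d, α) = (x[1] + α * d[1], x[2] .+ α .* d[2])

# α d, applied componentwise
my_scale(d, α) = (α * d[1], α .* d[2])
```

These functions would then be passed to the solver through its keyword arguments, so the tuple never has to pretend to be an AbstractVector.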