# Release: NLsolve 3.0.0

Consider this a PSA about an important breaking change that ships with NLsolve 3.0.0.

Upon running METADATA’s CIBot, we found that the breaking change in NLsolve 3.0.0 did indeed cause some robotics packages to fail, so please verify that your private code is not using NLsolve in a way that will break before updating.

So what is the breaking change?

The breaking change is one that many users have asked for, and it relates to the discussion around mutation of the residual vector of the system you’re solving with `nlsolve` (or the convenience function `fixedpoint` for fixed points). Consider the following:

``````julia> using NLsolve

julia> f(F, x) = copyto!(F, (log(x[1]), log(x[2]) + 1.0))
f (generic function with 1 method)

julia> nlsolve(f, [1.5, 0.5])
Results of Nonlinear Solver Algorithm
* Algorithm: Trust-region with dogleg and autoscaling
* Starting Point: [1.5, 0.5]
* Zero: [1.0, 0.367879]
* Inf-norm of residuals: 0.000000
* Iterations: 4
* Convergence: true
* |x - x'| < 0.0e+00: false
* |f(x)| < 1.0e-08: true
* Function Calls (f): 5
* Jacobian Calls (df/dx): 5

``````

(phew, it worked)

Some users, in some circumstances, find the mutating interface annoying. It’s mainly there for performance reasons, but what if the calculations inside your function outweigh such considerations on the solver side, and you think it makes for ugly code? You’d rather write:

``````f(x) = log.(x) .+ [0.0, 1.0]
nlsolve(f, [1.5, 0.5])
``````

With NLsolve you can, as we’ll try to detect whether a method of `f` is available that accepts the `x0` guess passed to `nlsolve`. Consider the following (notice I changed the shift to be negative):

``````julia> f(x) = log.(x) .- [0.0, 1.0]
f (generic function with 2 methods)

julia> nlsolve(f, [1.5, 0.5])
Results of Nonlinear Solver Algorithm
* Algorithm: Trust-region with dogleg and autoscaling
* Starting Point: [1.5, 0.5]
* Zero: [1.0, 2.71828]
* Inf-norm of residuals: 0.000000
* Iterations: 6
* Convergence: true
* |x - x'| < 0.0e+00: false
* |f(x)| < 1.0e-08: true
* Function Calls (f): 7
* Jacobian Calls (df/dx): 7
``````
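One way detection like this could work is to check whether `f` has a method applicable to a single argument of the guess’s type. A minimal sketch (hypothetical — the helper `detect_inplace` is made up here and is not necessarily NLsolve’s actual implementation):

``````julia
# Hypothetical sketch: if `f` can be called with the initial guess alone,
# treat it as out-of-place; otherwise fall back to the mutating form.
f(F, x) = copyto!(F, (log(x[1]), log(x[2]) + 1.0))  # in-place method only
g(x) = log.(x) .- [0.0, 1.0]                        # out-of-place method only

detect_inplace(func, x0) = !applicable(func, x0)

detect_inplace(f, [1.5, 0.5])  # true:  only a two-argument method exists
detect_inplace(g, [1.5, 0.5])  # false: a one-argument method applies
``````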

Previously, this required the user to write

``````nlsolve(f, x0; inplace = false)
``````

This automatic detection does come at a small price. If you have both in-place and out-of-place versions of `f` (that is, two methods of `f`), you now have to opt in for the mutating method to be used (remember that the mutating method had a shift of +1.0 on the second element):

``````julia> nlsolve(f, [1.5, 0.5]; inplace=true)
Results of Nonlinear Solver Algorithm
* Algorithm: Trust-region with dogleg and autoscaling
* Starting Point: [1.5, 0.5]
* Zero: [1.0, 0.367879]
* Inf-norm of residuals: 0.000000
* Iterations: 4
* Convergence: true
* |x - x'| < 0.0e+00: false
* |f(x)| < 1.0e-08: true
* Function Calls (f): 5
* Jacobian Calls (df/dx): 5
``````

Of course, in most cases this can be avoided by following the `f!` naming convention often used in Julia for mutating functions.
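For example (a sketch following that naming convention, not code from the release itself): giving the mutating and allocating versions distinct names leaves no ambiguity for the detection to resolve:

``````julia
using NLsolve

# Distinct names, one method each: nothing for the detection to guess at.
f!(F, x) = copyto!(F, (log(x[1]), log(x[2]) + 1.0))  # mutating, by convention
f(x) = log.(x) .+ [0.0, 1.0]                         # allocating

nlsolve(f!, [1.5, 0.5])  # only a two-argument method: runs in-place
nlsolve(f,  [1.5, 0.5])  # only a one-argument method: runs out-of-place
``````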

Happy solving and remember to give a star if you use and like NLsolve!
pkofod


Is that much more convenient than

``````f!(F, x) = @. F = log(x) + [0.0, 1.0]
``````

?

This was a very contrived example; imagine instead a whole range of non-mutating operations that end with returning some vector.

I would never use this interface myself, but some people use Julia to show numerical examples in classes or elsewhere and don’t care about these things. They just want to write `f(x) = stuff_with_x(x)`, and if it’s easy to accommodate, I’m fine with it. Since I got involved in Optim and NLsolve, people have been asking for an easy way to do this, so…

Makes sense, thanks for explaining.


I think this is an improvement, and I wish more packages made “functional” interfaces the default and mutating a pre-allocated buffer opt-in.

The reason for this is that when using generic methods, I find it very tricky to determine the element type I want to pre-allocate for containers. All of the ways I know of are rather brittle and cumbersome, especially when using multiple AD packages simultaneously.
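To illustrate the point (a sketch using ForwardDiff as a representative AD package; `f_prealloc` is a made-up name): a pre-allocated `Float64` buffer cannot hold the dual numbers an AD package pushes through the function, whereas the functional style sidesteps the eltype question entirely:

``````julia
using ForwardDiff

f(x) = log.(x) .- [0.0, 1.0]        # functional: output eltype follows x

function f_prealloc(x)
    F = Vector{Float64}(undef, 2)   # brittle: wrong eltype under AD
    F .= log.(x) .- [0.0, 1.0]
    return F
end

ForwardDiff.jacobian(f, [1.5, 0.5])            # works: eltype is inferred
# ForwardDiff.jacobian(f_prealloc, [1.5, 0.5]) # errors: Dual numbers cannot
#                                              # be stored in a Float64 buffer
``````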

I understand the need to economize on allocations, but for the algorithms I work with, saving an allocation at the outermost level has a negligible effect on speed.


This could just as well have been a direct copy of many of the other requests I’ve gotten. The other main point is that it’s much easier to facilitate lecturing on (and learning) an actual subject (one that is not Julia programming) the more the code looks like the math.

Either way, if people think this is an improvement, I’m happy to have it. (I intend to do something similar in Optim, though in practice I’ll just pull all of this back into NLSolversBase — but that’s a detail.) I mainly wanted to have this post to refer to if people didn’t understand why their code failed when they happened to have two methods for their function, as described above!
