Apart from `IntervalRootFinding.jl`, is there a way to “help” the solvers by providing bounds on the unknown variables? E.g., is there a way to inform `NLsolve.jl` or `NonlinearSolvers.jl` that `x[i] >= 0`, so that the solver does not search where that constraint is violated?

No, but you can rewrite `f` to enforce it through a change-of-variables formula.

Not recommended if you can avoid it, but following the change-of-variables route just mentioned, you could substitute something like `x[i] = y[i]^2` or `x[i] = exp(y[i])` for a free real variable `y[i]`.
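As a self-contained sketch (with a hypothetical toy residual, and a plain Newton iteration standing in for a real solver), the `x = exp(y)` substitution looks like this:

```julia
# Toy residual with a positive root at x = 2; the substitution x = exp(y)
# guarantees x > 0 no matter what value the iteration tries for y.
function solve_positive()
    f(x) = x^2 - 4       # hypothetical residual, positive root at x = 2
    g(y) = f(exp(y))     # transformed residual in the free variable y
    dg(y) = 2 * exp(2y)  # chain rule: g(y) = exp(2y) - 4
    y = 0.0
    for _ in 1:50        # plain Newton iteration on the unconstrained y
        y -= g(y) / dg(y)
    end
    exp(y)               # map back; nonnegative by construction
end
```

The solver only ever sees the unconstrained `y`, so it can never step into the forbidden region of `x`.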

Yes, I know those options. But what about providing the solver a box in which I know the zero is located?

EDIT: Now that I think about it, I could modify a logistic curve to map the reals to an interval.

How effective is it to add penalty terms to the residuals of the system? Something like:

```
function f!(res, x)
    res[1] = f(x) + 100 * (x[1] < 0)  # penalize violations of x[1] >= 0
    res[2] = g(x) + 100 * (x[2] < 0)
end
```

Those penalties have zero derivative so they won’t naturally pull things back into the good set.
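To make the zero-derivative point concrete: a step penalty like `100 * (x[1] < 0)` is piecewise constant, so a gradient-based solver feels no push back toward the feasible set. A smooth alternative (a sketch only, not a recommendation over the change of variables above) is a quadratic hinge, which vanishes on the feasible set and has a restoring gradient outside it:

```julia
# Quadratic hinge: zero (and flat) for x >= 0, grows like 100*x^2 for x < 0,
# so its derivative 200*min(x, 0) pushes iterates back toward the feasible set.
hinge(x) = 100 * min(x, 0.0)^2
```

Note that if the true root lies strictly inside the feasible region, the hinge is identically zero there and does not shift the solution.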

I don’t necessarily know if (aside from looking at interval methods) it makes sense to say that you’re “helping” the algorithm in any way. You can project, etc to enforce it, but the bounds don’t really help any method that I can think of if we’re talking about multivariate problems. If you’re thinking about something like bisection/Brent’s method for root finding or some other univariate method that starts with a bracket/interval and shrinks it, sure, but otherwise no. There’s no guarantee that imposing a box known to contain the solution (by projection for example) will make something like Newton’s method converge in fewer iterations. It might actually get stuck trying to go beyond the boundary to look for a root/zero of a function, and end up stopping without finding a root if you do so.

I see, but there is a case for enforcing bounds to prevent the algorithm from wandering into regions where the system is not defined. For example, if the system has logarithms, we don’t want the algorithm to try negative numbers.

In chemical physics, the volume fraction \phi is strictly in the range [0,1], and it is very common to have equations containing terms like \log{\phi} or \log{(1-\phi)}. It would be really helpful if we could tell the algorithm to stay in the range [0,1]; otherwise, the algorithm will simply fail on the \log evaluation if a bad initial value is provided. I have many root-finding problems that really need algorithms robust to this issue.

I don’t know why parameterizing variables is not recommended, but you can parameterize `ϕ` with a logistic transform that maps the reals to [0, 1]:

```
logistic(x) = inv(1 + exp(-x))
function system(x)
    ϕ = logistic(x)  # ϕ ∈ (0, 1) for any real x
    f(ϕ)
end
```

The only problem I can imagine is that the logistic map is not very sensitive to values far from 0.
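That insensitivity is easy to see numerically: far from 0 the logistic saturates, and in Float64 it rounds exactly to the endpoint, so the solver loses all sensitivity to changes in `x`:

```julia
logistic(x) = inv(1 + exp(-x))

logistic(0.0)   # 0.5: the map is responsive near the center
logistic(40.0)  # exactly 1.0 in Float64: exp(-40) ≈ 4e-18 vanishes next to 1.0
```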

Sure, if you just want it to stay in that region you can check before you evaluate, and if it’s outside return NaN or some other non-finite value. For the bad initial values, you can control that as a user by moving the initial guess inside the interval if it would otherwise start outside.
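A minimal sketch of that guard, with a hypothetical log-containing residual, could be:

```julia
# Return NaN outside (0, 1) so the solver sees a non-finite value instead of a
# DomainError from log; log1p(-ϕ) is an accurate way to compute log(1 - ϕ).
function guarded_residual(ϕ)
    (0 < ϕ < 1) || return NaN
    log(ϕ) + log1p(-ϕ)  # hypothetical residual containing logarithms
end
```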

If \phi is not too close to 1, your solution should be fine. The risk is that you push x to \infty and end up near a singularity. The best way to tell is to run the experiment and see what happens for your problem.

You could also reformulate the problem as a bound-constrained nonlinear least squares problem. The danger there is that you could wind up at a local minimum on the boundary.
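A bare-bones sketch of that reformulation (projected gradient descent on the squared residual, standing in for a real bound-constrained least-squares solver; `F` and its derivative here are hypothetical toy inputs) might look like:

```julia
# Minimize F(ϕ)^2 over the box [lo, hi]: take a gradient step on the squared
# residual, then project (clamp) back into the box after each step.
function bounded_lsq(F, dF; ϕ = 0.5, lo = 1e-8, hi = 1.0, step = 0.05, iters = 500)
    for _ in 1:iters
        g = 2 * F(ϕ) * dF(ϕ)             # gradient of the squared residual
        ϕ = clamp(ϕ - step * g, lo, hi)  # projection back into [lo, hi]
    end
    return ϕ
end

# Toy problem: log(ϕ) + 1 = 0 has its root at ϕ = exp(-1) ≈ 0.368, inside [0, 1].
bounded_lsq(ϕ -> log(ϕ) + 1, ϕ -> inv(ϕ))
```

If the root sits in the interior, the method converges to it; if the residual has no zero in the box, this is exactly where the boundary local minimum danger shows up.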