NLsolve.jl for different input and output dimensions?

Hi all,

I am currently trying out NLsolve.jl to find roots of given functions, which works great so far, but I can’t seem to find a way to specify an output dimension different from the input dimension… Is there a way to do so, or are there other packages that allow this?

Thanks!

Hi, as far as I know, NLsolve.jl solves equations of the form

0 = f_1(x_1, x_2, x_3, \ldots)\\
0 = f_2(x_1, x_2, x_3, \ldots)\\
0 = f_3(x_1, x_2, x_3, \ldots)\\
\ldots

If the number of equations does not match the number of variables x_i, your system is either under- or overdetermined. Are you certain the problem you are looking at is actually a root-finding problem? (Or perhaps an optimization?)
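For reference, here is a minimal square-system sketch using NLsolve.jl's in-place `nlsolve(f!, x0)` API (the function and starting point are made up for illustration):

```julia
using NLsolve

# Square system: 2 equations in 2 unknowns.
function f!(F, x)
    F[1] = x[1]^2 + x[2]^2 - 1   # unit circle
    F[2] = x[1] - x[2]           # diagonal
end

sol = nlsolve(f!, [1.0, 0.0])
# converged(sol) reports success; sol.zero holds the root,
# here close to [1/sqrt(2), 1/sqrt(2)].
```

Note that `F` and `x` have the same length — that is the square-system assumption in question.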

3 Likes

Actually, I know a solution is not unique, but lies in a set of dimension at least k; that is why I would like to solve n+k equations, to recover local uniqueness. I don’t know if this is the most appropriate way, though.

I tried making changes of coordinates to drop dimensions, but this didn’t work well in my particular case.

NLsolve will only give you point solutions and therefore requires k=0.

I don’t know of a package which would give you the entire solutions set for k>0, however:

You could add some cost function to indicate which point solution (out of the complete solution set) you are interested in. Then solve it as an optimization problem with your n equations as side conditions.

EDIT: Or if you just want any solution you can just append k zeros to the output and NLsolve will hopefully converge to one solution out of your k-dimensional set.
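A sketch of that zero-padding idea, with a made-up underdetermined `g` mapping R^3 to R^2 (one caveat: the constant zero component makes the Jacobian singular along that row, so the solver may behave poorly):

```julia
using NLsolve

# Hypothetical underdetermined system: 2 equations, 3 unknowns.
g(x) = [x[1] + x[2] - 1, x[2] - x[3]]

# Pad the residual with one zero so the system becomes square (R^3 -> R^3).
function f!(F, x)
    F[1:2] .= g(x)
    F[3] = 0.0   # artificial equation, satisfied everywhere
end

sol = nlsolve(f!, [0.5, 0.5, 0.5])
# If it converges, sol.zero is one point out of the 1-dimensional solution set.
```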

1 Like

You might want to try this out: Interval constraint programming tutorial

What motivates your need? What problem do you want to solve?

2 Likes

Characterising the entire solution set is a big part of the problem indeed… I was counting on NLsolve to find at least one root. I thought of the cost function, but doesn’t it require (if the equations f_j(x_i) = 0 are side conditions) the initial point to satisfy them already?
As for the k zeros, my problem is overdetermined, so I should perhaps add k free variables to the input instead. I haven’t tried this one yet; I will give it a shot.

I will look at this, thanks!

I need to find the roots of a function on \mathbb{R}^{18} (I can impose bounds, so the domain is [-a, a]^{18} instead), but as I said, the roots actually lie in continuous sets, and the function is costly to evaluate (about a second per call). I looked into NOMAD.jl (blackbox optimization), but it didn’t succeed, so I am currently trying other methods.

If the problem is overdetermined (but you are certain a solution exists), you could add some free variables to match the number of conditions?

1 Like

It looks like you could formulate this as a nonlinear least squares problem,

\min_x F(x), \quad \text{where } F(x) = \sum_i f_i^2(x),

and use an optimizer. If the input dimension is N and the output is M, then M > N is the overdetermined case and, while you may not have a solution, the optimizer will still tell you something. In the underdetermined (M < N) case, a least squares code would likely find one solution.
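As a sketch of that formulation with Optim.jl (the residual here is a made-up overdetermined example with M = 3 > N = 2, chosen so that a zero-residual point exists):

```julia
using Optim

# Hypothetical overdetermined residual: R^2 -> R^3.
resid(x) = [x[1] - 1, x[2] - 2, x[1] + x[2] - 3]

# Nonlinear least squares objective F(x) = sum_i f_i(x)^2.
cost(x) = sum(abs2, resid(x))

res = optimize(cost, [0.0, 0.0], BFGS())
# Optim.minimizer(res) is close to [1.0, 2.0], where the residual vanishes.
```

If no zero-residual point exists, the minimizer still gives the best fit in the least squares sense, and the minimum value of the cost tells you how far from solvable the system is.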

I am not clear on what kind of problem you have. I see that N = 18. Is M = 18-k?

I think you should try a nonlinear least squares code before trying interval methods or direct search methods like Nomad because gradient-based methods work better (most of the time) if your problem is smooth.

2 Likes

I am in the overdetermined case (M = 18 + k > N): I add equations that a solution must satisfy. I will try a least squares method then!

Just fyi, overdetermined means more equations than variables.

Underdetermined means more variables than equations.

1 Like

You should try to find a solver that does not add the artificial k equations, unless the added equations are not artificial and mean something for the problem. The risk in adding equations is that you can turn a problem with a zero residual into a problem without one.

2 Likes

So I tried using this, but NLsolve doesn’t converge in this case for my problem. Currently trying it out on smaller test cases to see what’s going on.

Because of the problem’s symmetries, I am sure that if a solution exists, it belongs to a continuous family of solutions. I tried changing variables to eliminate some of them and recover uniqueness that way, but it dramatically slows down the process, and I have no clue why.

(I should maybe add that the function I am minimizing uses TaylorIntegration.jl’s taylorinteg, and thus has no analytical form.)