Solving a system of two equations in two variables with certain properties

I was wondering whether there is a more robust and faster way to solve a system of two equations in two variables than the NLsolve package.

The first equation is F(x,y)=0 and the second is G(x,y)=0. Both of them are very complicated functions. However, what I know about them is as follows:

  1. For any (x, y_{F}(x)) such that F(x, y_{F}(x))=0, y_{F}(x) is a decreasing function of x. In other words, if x increases, y must decrease to satisfy F(x,y)=0.

  2. On the other hand, for G(x,y_{G}(x))=0, y_{G}(x) is an increasing function.

Given these two properties, I know there can be at most one solution, but existence is not guaranteed. There are also a few other parameters whose values can lead to the non-existence of a solution.

Ideally, I could plot y_{F}(x) and y_{G}(x) and find the crossing point. Alternatively, I could use the NLsolve package, but I have no clue how its algorithm searches for the root, especially when a solution does not exist.
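For instance, I have something like the following sketch in mind (toy stand-ins for the real F and G, assuming Plots.jl):

```julia
using Plots   # assuming Plots.jl; any backend should do

# Toy stand-ins for the actual (complicated) F and G.
F(x, y) = x + y - 1    # its zero curve y_F(x) = 1 - x is decreasing
G(x, y) = y - x^2      # its zero curve y_G(x) = x^2 is increasing for x > 0

xs = range(0, 2, length = 200)
ys = range(0, 2, length = 200)

# Evaluate each function on the grid (rows indexed by y, columns by x)
# and draw only the zero-level contour; the crossing point is the solution.
contour(xs, ys, [F(x, y) for y in ys, x in xs], levels = [0.0])
contour!(xs, ys, [G(x, y) for y in ys, x in xs], levels = [0.0])
```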

Any comments or suggestions would be tremendously helpful.

That is a super interesting question.

I will answer regarding NLsolve and the algorithms it uses.

In this case, you want to find [x,y] such that [F(x,y), G(x,y)] = 0. But for a moment, let’s focus on only finding f(x) = 0 for some arbitrary function f.

This is what is called a root-finding problem. The easiest and most naive method for solving it is the so-called bisection method.

If you know that f(a) > 0 and f(b) < 0 and f is continuous, then f must necessarily cross zero somewhere between a and b. So you check the value at c = (a+b)/2: if f(c) < 0, you can make the same argument for a and c; otherwise, for c and b. You do this over and over again until you find c such that f(c) = 0, or in practice, until |f(c)| is smaller than a certain tolerance.
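For concreteness, here is a minimal bisection sketch in plain Julia (the example function at the end is arbitrary):

```julia
# Minimal bisection sketch: assumes f is continuous and f(a), f(b) have opposite signs.
function bisect(f, a, b; tol = 1e-10, maxiter = 200)
    fa, fb = f(a), f(b)
    fa * fb <= 0 || error("f(a) and f(b) must have opposite signs")
    for _ in 1:maxiter
        c  = (a + b) / 2
        fc = f(c)
        abs(fc) < tol && return c
        # Keep the half-interval on which the sign change survives.
        if fa * fc < 0
            b, fb = c, fc
        else
            a, fa = c, fc
        end
    end
    return (a + b) / 2
end

bisect(x -> x^2 - 2, 0.0, 2.0)   # ≈ 1.4142…, the positive root of x^2 - 2
```

Bisection halves the bracket at every step, so it is slow but extremely robust as long as you can bracket the root.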

This type of problem arises often, and hence there exist many methods that are ‘robust and fast’. NLsolve lets you use some of the best ones.

From a pedagogical perspective, I strongly encourage you to code your own method. Newton's method is the best place to start for your problem.
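To make that concrete, here is a rough sketch of Newton's method for your 2x2 system, with a forward-difference Jacobian; the F and G passed at the end are toy placeholders, not your actual functions:

```julia
# Newton's method sketch for the system [F(x,y), G(x,y)] = 0,
# using a forward-difference approximation of the Jacobian.
function newton2d(F, G, x, y; tol = 1e-10, maxiter = 100, h = 1e-7)
    for _ in 1:maxiter
        r1, r2 = F(x, y), G(x, y)
        max(abs(r1), abs(r2)) < tol && return (x, y, true)
        # Approximate the Jacobian entries by forward differences.
        J11 = (F(x + h, y) - r1) / h
        J12 = (F(x, y + h) - r1) / h
        J21 = (G(x + h, y) - r2) / h
        J22 = (G(x, y + h) - r2) / h
        # Newton step: solve J * [dx, dy] = [r1, r2] and move against it.
        dx, dy = [J11 J12; J21 J22] \ [r1, r2]
        x -= dx
        y -= dy
    end
    return (x, y, false)   # did not converge within maxiter
end

# Toy example: F = x + y - 1 (decreasing zero curve), G = y - x^2 (increasing for x > 0).
newton2d((x, y) -> x + y - 1, (x, y) -> y - x^2, 0.5, 0.5)
```

Newton converges very quickly near the solution but can wander off from a poor starting point, which is exactly why trust-region variants exist.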

You ask two essential questions: whether faster methods exist and whether more robust ones do. The short answer is no. The longer answer is that it depends on what your functions F and G specifically are. Most likely, they are tame enough that the existing methods are good enough. In general, the computation should take milliseconds, if not less.

To conclude, what happens when there are no solutions? Nothing dramatic: the method simply does not converge, and NLsolve will tell you so.
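As a quick illustration (a sketch assuming NLsolve's standard interface; F and G here are toy stand-ins for your real functions):

```julia
using NLsolve

# Toy stand-ins for the actual F and G.
F(x, y) = x + y - 1
G(x, y) = y - x^2

# NLsolve expects an in-place residual function over a vector of unknowns.
function residual!(r, v)
    r[1] = F(v[1], v[2])
    r[2] = G(v[1], v[2])
end

result = nlsolve(residual!, [0.5, 0.5])   # [0.5, 0.5] is the initial guess

if converged(result)
    println("solution: ", result.zero)
else
    println("did not converge after ", result.iterations, " iterations")
end
```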

Let me know if this answers your question, and don’t hesitate to ask!

Thank you very much for the detailed and thoughtful suggestions.

There were two reasons I was a bit hesitant to use NLsolve:

  1. One of the functions, F, is defined through a fixed-point computation depending on x and y. NLsolve sometimes picks a trial value for which the fixed-point algorithm fails to converge.

  2. Maybe related to the previous point, my problem seemed sensitive to initial conditions. I should try methods other than the trust-region.

As you suggested, I'm trying to code my own solution. First, I collect several points that satisfy F(x,y)=0 and interpolate them. Then, I check one by one whether the G(x,y)=0 condition is met.
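In code, the idea looks roughly like this (a sketch; the collected points and G below are placeholders, and the interpolation is plain piecewise-linear):

```julia
# Sketch of the approach: (xs[i], ys[i]) are points collected so that F(xs[i], ys[i]) ≈ 0,
# sorted by xs; here they are faked from a placeholder curve. G is also a placeholder.
G(x, y) = y - x^2

xs = collect(range(0.0, 1.0, length = 11))
ys = 1 .- xs    # pretend these were obtained by solving F(x, y) = 0 numerically

# Piecewise-linear interpolation of the collected points.
function yF(x)
    i = clamp(searchsortedlast(xs, x), 1, length(xs) - 1)
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return (1 - t) * ys[i] + t * ys[i + 1]
end

# Scan a fine grid and report where G(x, yF(x)) changes sign, if it ever does.
grid = range(first(xs), last(xs), length = 1000)
vals = [G(x, yF(x)) for x in grid]
k = findfirst(i -> vals[i] * vals[i + 1] <= 0, 1:length(vals) - 1)
k === nothing ? println("no crossing found") : println("crossing near x ≈ ", grid[k])
```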

Just to clarify:

  1. Yes, in general you cannot guarantee that a solution will be found unless the function F is ‘nice enough’. I do not know which methods you used in NLsolve; perhaps if you choose BFGS/Newton it will do better (see the snippet after this list for switching methods).

  2. Trust-region is a good algorithm overall. What it essentially does is approximate the function by a simpler one and try to move towards a root of that approximation. Since the approximation is only accurate near the current point, it can only be trusted locally - hence ‘trust region’, the region in which you trust your approximation.
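If it helps, switching algorithms in NLsolve is just a keyword argument; a minimal sketch (the residual function below uses placeholder equations standing in for F and G):

```julia
using NLsolve

# In-place residual with placeholder equations standing in for F and G.
function residual!(r, v)
    r[1] = v[1] + v[2] - 1   # placeholder for F(x, y)
    r[2] = v[2] - v[1]^2     # placeholder for G(x, y)
end

x0 = [0.5, 0.5]

res_tr = nlsolve(residual!, x0)                    # default method: trust region
res_nw = nlsolve(residual!, x0, method = :newton)  # Newton's method instead
```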

The algorithm you suggest does not seem very efficient. Note that it requires you to check all the points. Maybe you should try one of the more canonical methods.