# How to catch this error?

I am optimizing a complicated function. That function depends on 2 variables `t1` and `t2`. I can derive first order conditions that result in a system of 2 equations. I can also derive conditions for when `t1 < t2`, `t1 > t2`, and `t1 == t2`. Obviously, only one of those conditions will contain the optimum. The following program is MWE, ignoring the case `t1 == t2` for simplicity:

```julia
using Roots

# if t1 < t2
function foc12!(t; x=10.0)
    t[1] = x
    t2star = find_zero(t2 -> (t2 - 12.0)^3, (t[1], 30.0), Bisection(); rtol=1e-10)
    t[2] = t2star
    return t
end

# if t2 < t1
function foc21!(t; x=10.0)
    t[2] = x
    t1star = find_zero(t1 -> (t1 - 21.0)^3, (t[2], 30.0), Bisection(); rtol=1e-10)
    t[1] = t1star
    return t
end

function obj(t; x=10.0)
    0.25*(t[1] - x)^4 + 0.25*(t[2] - x)^4
end

function fullsol!(t; x=10.0)
    # copy so the two candidate solutions don't alias the same array
    sol12 = foc12!(copy(t), x=x)
    sol21 = foc21!(copy(t), x=x)

    truezero = [Inf, Inf]
    val = -Inf
    for i in [sol12, sol21]
        tryval = obj(i; x=x)
        if tryval > val
            truezero = i
            val = tryval
        end
    end

    t .= truezero
    return t, val
end
```

The problem arises with, say, `x = 15.0`, when `foc12!` throws an error:

```julia
julia> t = [1.0, 1.0]

julia> fullsol!(t; x=15)
ERROR: ArgumentError: The interval [a,b] is not a bracketing interval.
You need f(a) and f(b) to have different signs (f(a) * f(b) < 0).
Consider a different bracket or try fzero(f, c) with an initial guess c.
```

That means that, for this value of `x`, the only valid case is the one where `foc21!` yields the optimal values. However, I still want to be able to try all cases for other values of `x`, for which both `t1 < t2` and `t2 < t1` are feasible. How can I do that?

I replaced `fullsol!` with this one:

```julia
function fullsol2!(t; x=10.0)
    sol12 = [NaN, NaN]
    sol21 = [NaN, NaN]

    try
        sol12 = foc12!(copy(t), x=x)
    catch
        sol12 = [NaN, NaN]
    end

    try
        sol21 = foc21!(copy(t), x=x)
    catch
        sol21 = [NaN, NaN]
    end

    truezero = [Inf, Inf]
    val = -Inf
    for i in [sol12, sol21]
        tryval = obj(i; x=x)
        if tryval > val
            truezero = i
            val = tryval
        end
    end

    t .= truezero
    return t, val
end
```

Is there a better way?

Let me try to clarify:

- You have a parameter `x`, a function `f1(y; x)`, and another function `f2(y; x)` (the FOCs).
- You know from theory that there is exactly one `y*` in `[x, 30]` for which either `f1(y*; x) == 0` or `f2(y*; x) == 0` (but not both).
- You are trying to find this unique `y*`.

Did I get that right?

I am assuming `f1` and `f2` are not monotone (otherwise it would be easy), but they are guaranteed to have a unique root.

- One option: if you have a good idea of the shapes of `f1` and `f2`, evaluate them on a grid. If you find a sign change, run `find_zero` on that interval.
- Another option: use `find_zeros` instead (though I’m not sure what it does when no zero is found; it is also probably inefficient).
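The grid-scan option might look something like this. It is only a sketch: `bracket_on_grid` is a made-up helper name (not part of `Roots`), and the cubic `f` is just a stand-in for one of your FOCs:

```julia
using Roots

# Made-up helper: sample f on n grid points over [a, b] and return the first
# subinterval where f changes sign, or `nothing` if no sign change is found.
# (Exact zeros landing on a grid point are ignored for brevity.)
function bracket_on_grid(f, a, b; n=50)
    xs = range(a, b; length=n)
    fx = f.(xs)
    for i in 1:n-1
        if fx[i] * fx[i+1] < 0
            return (xs[i], xs[i+1])
        end
    end
    return nothing
end

f(t) = (t - 12.0)^3                 # stand-in FOC with its root at 12

bracket = bracket_on_grid(f, 10.0, 30.0)
root = bracket === nothing ? nothing : find_zero(f, bracket, Bisection())
```

With `x = 15.0` the scan over `(15.0, 30.0)` would find no sign change and return `nothing`, so you can skip that case without a try/catch.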

@hendri54 thanks for taking the time to read this convoluted writeup. Let me try to clarify the problem.

I have a function `N()` to maximize that depends on `t1` and `t2`, and some parameters `x` (you can think of these as covariates). I can try to optimize `N` directly, but there is a theoretical case when the optimal `t1` and `t2` are exactly equal to each other. I am unsure whether a generic optimization algorithm can guarantee that. Another concern is that optimizing `N` directly with a generic optimization algorithm would be slow.

So the approach I take is to derive FOCs assuming 3 cases for `t1` and `t2`: `t1 < t2` (call this FOC function `foc12`), `t2 < t1` (call it `foc21`), and `t1 == t2` (which I omit in the MWE above). I thought I could solve for the zeros of each of those functions and then evaluate the zeros of `foc12` and `foc21` on `N` to obtain the “true” global maximizer (that is, pick the zero at which `N` is largest). Notice that this approach presumes that all FOCs have zeros.

The problem I encountered is that for some parameter values `x`, some of the FOCs have no zeros. That means that for the given case, say `t1 < t2`, there is no local optimum of `N` on that interval, and hence the zero of `foc21` (when `t2 < t1`) is the global maximizer. So I am looking for a way to circumvent the error I get when trying to find the zeros of an FOC that has none. The function `fullsol2!` accomplishes that, but it looks very clunky to me, so I am asking whether there is a better way to accomplish the same thing.

OK - then let’s go back to my proposals.

If the FOCs are monotone: easy.

If they are not: are you sure there is at most one root for each case?
If not: `find_zeros` would be needed instead of `find_zero`.

If yes: I would use a form of grid search to find one interval where the sign changes. How to implement this depends on what you know about the shapes of the FOCs. Then run `find_zero` on that interval.
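On the `find_zeros` route: as far as I recall, `find_zeros` from `Roots` returns a plain vector of the zeros it finds on the interval, and simply an empty vector when there are none, so no try/catch is needed. A minimal sketch (the cubic `f` is just a stand-in for one of the FOCs):

```julia
using Roots

f(t) = (t - 12.0)^3          # stand-in FOC with its root at 12

zs  = find_zeros(f, 15.0, 30.0)   # no root in [15, 30] -> empty vector
zs2 = find_zeros(f, 10.0, 30.0)   # root at t = 12

isempty(zs)   # this case is infeasible, skip it
```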


BTW, I should have linked this writeup of the problem from another post: Status of integration, optimization, and nl solvers - #14 by amrods

Well, `Bisection` (and other bracketing methods) throws that error when you don’t pass a bracketing interval. You can check this before the call (avoiding try/catch) with two function evaluations, i.e. whether `f(a) * f(b) < 0`; if `f` is costly to compute, those two values could in principle be reused in the bisection call. Note that the interval may still contain a zero even if it isn’t bracketing (though not if the function is monotone). You might also try running `find_zero(f, x)` from an initial point rather than an interval, but that may return values outside the interval you are constraining to. What isn’t available in `Roots`, but is in `IntervalRootFinding`, is the ability to confirm that an interval contains a unique root.
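The pre-check can be wrapped in a small helper so the solver code stays flat. A sketch (`safe_bisect` is a made-up name, not part of `Roots`):

```julia
using Roots

# Made-up wrapper: only call Bisection when (a, b) actually brackets a root;
# otherwise return `nothing` instead of letting find_zero throw ArgumentError.
function safe_bisect(f, a, b)
    fa, fb = f(a), f(b)
    fa == 0 && return a            # endpoint happens to be a root
    fb == 0 && return b
    fa * fb < 0 || return nothing  # not a bracketing interval
    return find_zero(f, (a, b), Bisection())
end

safe_bisect(t -> (t - 12.0)^3, 15.0, 30.0)  # no sign change -> nothing
safe_bisect(t -> (t - 12.0)^3, 10.0, 30.0)  # bracketing -> root near 12
```

In `fullsol!` you could then collect the candidates and skip the ones that come back as `nothing`, instead of wrapping each call in try/catch.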
