# Julia implementation of the Interpolate, Truncate, Project (ITP) root finding algorithm

I have been unable to find any real code implementation of the ITP method. However, it claims to be the bee’s knees - just read the first paragraph of the Wikipedia article:

“In numerical analysis, the ITP method, short for Interpolate, Truncate and Project, is the first root-finding algorithm that achieves the superlinear convergence of the secant method while retaining the optimal worst-case performance of the bisection method. It is also the first method with guaranteed average performance strictly better than the bisection method under any continuous distribution. In practice it performs better than traditional interpolation and hybrid based strategies (Brent’s Method, Ridders, Illinois), since it not only converges super-linearly over well behaved functions but also guarantees fast performance under ill-behaved functions where interpolations fail.”

I implemented the pseudocode in the Wikipedia article, and thought I should share it. If anyone wants to put this in a package like `Roots.jl`, `NLsolve.jl` or `NonlinearSolve.jl`, feel free to - I’d be thrilled.

EDIT: provided defaults, and changed argument order to make having defaults more sensible.
EDIT 2: Put the function argument first, diverging further from the Wikipedia pseudocode, but converging towards the conventional (and sensible) ordering. Also added reference to the source of hyperparameter defaults.

```julia
"""
    ITP(f, a, b, ϵ=eps(), κ₁=0.2/(b-a), κ₂=2, n₀=1)

Find a root of the function `f` on the interval `[a, b]`, given a
tolerance `ϵ` and hyperparameters `κ₁`, `κ₂`, `n₀`. The hyperparameters
are subject to the following constraints:

    0 < κ₁ < Inf
    1 ≤ κ₂ < 1 + (1 + √5)/2
    0 < n₀ < Inf

A Julia implementation of the Interpolate, Truncate, Project algorithm,
as detailed in `https://en.wikipedia.org/wiki/ITP_method`.
Hyperparameter defaults are set on the basis of the reasoning provided
in `https://docs.rs/kurbo/0.8.1/kurbo/common/fn.solve_itp.html`.
"""
function ITP(f, a, b, ϵ=eps(), κ₁=0.2/(b-a), κ₂=2, n₀=1)
    0 < κ₁ < Inf            || error("κ₁ must be between 0 and ∞")
    1 ≤ κ₂ < 1 + (1 + √5)/2 || error("κ₂ must be between 1 and 1 + (1 + √5)/2 (1 + the golden ratio)")
    0 < n₀ < Inf            || error("n₀ must be between 0 and ∞")
    n_1div2 = ceil(Int, log2((b - a)/(2ϵ)))
    nₘₐₓ = n_1div2 + n₀
    y_a = f(a)
    y_b = f(b)
    sign(y_a) == sign(y_b) && error("sign(f(a)) == sign(f(b)): there is no guaranteed root in the given interval.")
    j = 0
    while b - a > 2ϵ
        # Calculating parameters:
        x_1div2 = (a + b)/2
        r = ϵ*2^(nₘₐₓ - j) - (b - a)/2
        δ = κ₁*(b - a)^κ₂

        # Interpolation (the regula falsi / secant point):
        x_f = (y_b*a - y_a*b)/(y_b - y_a)

        # Truncation (perturb the secant point towards the midpoint):
        σ = sign(x_1div2 - x_f)
        x_t = δ ≤ abs(x_1div2 - x_f) ? x_f + σ*δ : x_1div2

        # Projection (project onto the minmax interval [x_1div2 - r, x_1div2 + r]):
        x_ITP = abs(x_t - x_1div2) ≤ r ? x_t : x_1div2 - σ*r

        # Updating the interval. Comparing signs rather than assuming
        # f(a) < 0 < f(b), as the Wikipedia pseudocode does, so that
        # either bracket orientation works:
        y_ITP = f(x_ITP)
        if sign(y_ITP) == sign(y_b)
            b = x_ITP
            y_b = y_ITP
        elseif sign(y_ITP) == sign(y_a)
            a = x_ITP
            y_a = y_ITP
        else  # exact root found
            a = b = x_ITP
        end
        j += 1
    end
    return (a + b)/2
end
```
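As a quick sanity check, here is an example call (assuming the `ITP` function from the code above is in scope); the bracket `[1, 2]` contains the root `sqrt(2)` of `x^2 - 2`:

```julia
# Find the root of x^2 - 2 on [1, 2], i.e. sqrt(2) ≈ 1.4142135...
f(x) = x^2 - 2
root = ITP(f, 1.0, 2.0, 1e-10)
println(root)                        # agrees with sqrt(2) to within ~1e-10
println(abs(root - sqrt(2)) < 1e-8)  # true
```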

One could consider adding the following constraint:
`ϵ ≤ big(1e-78) && error("A tolerance this small has led to indefinite iterations in the past. Aborting.")`


This looks great! I would recommend giving defaults for the hyperparameters though.


I completely agree, defaults should be provided. From the Wikipedia article, it seems like the rate of convergence is larger for larger κ₂, so it should probably default to the maximum value.

For the other parameters, I have no idea what good defaults are, so I will not set any until I have a feeling for what is generally good.


Glad you did this. I had a similar thought last year. Could you perform a quick comparison with Brent from Optim.jl?


Sorry, but I implemented this while I should have been doing something else, and I do not plan on stealing even more time to work with it. I don’t have anything but the code in the OP, so it should be easy to test for anyone who wants to.


From reference `a` in the Wikipedia article, the documentation of the Kurbo library’s `solve_itp` function:

The ITP method has tuning parameters. This implementation hardwires k2 to 2, both because it avoids an expensive floating point exponentiation, and because this value has been tested to work well with curve fitting problems.

The n0 parameter controls the relative impact of the bisection and secant components. When it is 0, the number of iterations is guaranteed to be no more than the number required by bisection (thus, this method is strictly superior to bisection). However, when the function is smooth, a value of 1 gives the secant method more of a chance to engage, so the average number of iterations is likely lower, though there can be one more iteration than bisection in the worst case.

The k1 parameter is harder to characterize, and interested users are referred to the paper, as well as encouraged to do empirical testing. To match the paper, a value of 0.2 / (b - a) is suggested, and this is confirmed to give good results.
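To get a feel for these trade-offs empirically, one can wrap the objective in a call counter and compare evaluation counts for different `n₀`. A minimal sketch, assuming the `ITP` function from the OP is in scope (`counted` is just a throwaway helper written here, not part of any package):

```julia
# Throwaway helper: wraps f so that every call increments a counter.
function counted(f)
    n = Ref(0)
    g(x) = (n[] += 1; f(x))
    return g, n
end

h(x) = x^3 - x - 2   # smooth function with a single root in [1, 2]
for n₀ in (0, 1)
    g, n = counted(h)
    root = ITP(g, 1.0, 2.0, 1e-12, 0.2/(2.0 - 1.0), 2, n₀)
    println("n₀ = $n₀: root ≈ $root after $(n[]) function evaluations")
end
```

As the quoted documentation explains, `n₀ = 0` caps the iteration count at bisection’s, while `n₀ = 1` gives the secant component more of a chance to engage at the cost of at most one extra iteration in the worst case.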


Thanks. I will update the OP.


An implementation for `Roots` is in this PR
