# NLsolve false convergence

Hi, I am a new user of Julia and this may sound like a basic question. I am trying to use NLsolve to find a solution to an equation of the form:
f(x) = a_0 + a_1/(1+x)^{b_1} + a_2/(1+x)^{b_2} + a_3/(1+x)^{b_3} + \cdots

With the substitution y = 1/(1+x), this can also be written as:

f(y) = a_0 + a_1 y^{b_1} + a_2 y^{b_2} + a_3 y^{b_3} + \cdots

where the a_i and b_i are constants.

I set show_trace to false, but it still displays the log. I just want it to return the value of the zero if the algorithm has converged. In some experiments I found that the algorithm sometimes incorrectly reports the status as converged when using the Anderson method. On some data sets I also got a DomainError when using the Anderson method (the f(y) form is more likely to trigger it).

In short, I need help with:

• How to get only the value of the variable (x in the code) when the algorithm has converged, without displaying the trace. I don’t want the program to throw an exception like DomainError; instead it should return a sentinel value (something like NaN) if it runs into numerical difficulties.
• What is wrong with my code that the algorithm falsely claims convergence in some instances, or am I misinterpreting the results?
```julia
using NLsolve

function my_func(x, a, b)
    A = Vector{Float64}(undef, length(b))
    for i in eachindex(b)
        A[i] = 1 / (1 + x[1])^b[i]
    end
    return a' * A
end

function my_deriv(x, a, b)
    A = Vector{Float64}(undef, length(b))
    B = Vector{Float64}(undef, length(b))
    for i in eachindex(b)
        # d/dx (1+x)^(-b) = -b * (1+x)^(-b-1), so the exponent is b[i] + 1
        # (the original had b[i] - 1, which gives a wrong Jacobian)
        A[i] = 1 / (1 + x[1])^(b[i] + 1)
        B[i] = -b[i] * a[i]
    end
    return B' * A
end

function f!(F, x)
    F[1] = my_func(x, a, b)
end

function j!(J, x)
    J[1, 1] = my_deriv(x, a, b)
end

a = [-10, 1, 1, 11]
b = [0, 1, 2, 3]

# The convergence flag is reported as true even though the zero is wrong:
#   * Zero: [-4.798376404676865e23]
nlsolve(f!, j!, [0.0]; method = :anderson, show_trace = false)

# Gives the right answer
nlsolve(f!, j!, [0.0]; method = :newton, show_trace = false)
```


Hi, unfortunately nlsolve claims it has converged when the iterations fail to make progress (and xtol == 0). Thus, you should also (or only) check f_converged:

```julia
res = nlsolve(f!, j!, [0.0]; method = :anderson, show_trace = false, beta = 1.0, ftol = 1e-10, xtol = 1e-12)
all_converged(res) = res.x_converged && res.f_converged
@show all_converged(res)
```


The stability of the Anderson method depends on its parameters. Setting beta = 0.1 seems to make it work. Still, the Anderson method is probably not the best choice for your problem.
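A minimal sketch of the damped call, restating the problem from the OP inline (Anderson does not use the Jacobian, so `j!` can be omitted; the inline `f!` is just a compact rewrite of the OP's function):

```julia
using NLsolve

a = [-10, 1, 1, 11]
b = [0, 1, 2, 3]

# Same residual as in the OP, written inline.
f!(F, x) = (F[1] = sum(a[i] / (1 + x[1])^b[i] for i in eachindex(b)))

# beta = 0.1 damps the Anderson update; with the default beta = 1.0 the
# iteration can run off to a spurious "converged" point.
res = nlsolve(f!, [0.0]; method = :anderson, beta = 0.1, ftol = 1e-10)
```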


That’s fabulous. Thank you

Terminating any iteration on small steps is risky in most cases. Consider

\sum_{n=1}^\infty 1/n

This series diverges. If you evaluate the sum numerically in order, starting with n=1, the sum will stagnate once 1/n is small enough for rounding effects to take over. That stagnation does not imply convergence.
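A quick way to see this stagnation numerically (a sketch in Float32, where rounding takes over after a couple of million terms; `harmonic32` is just an illustrative name):

```julia
# Naive forward summation of the harmonic series in Float32.
function harmonic32(N)
    s = Float32(0)
    for n in 1:N
        s += Float32(1) / n
    end
    return s
end

# Adding ten times as many terms changes nothing once 1/n falls below the
# rounding threshold of the running sum, even though the series diverges.
harmonic32(10_000_000) == harmonic32(100_000_000)  # → true
```

A "the sum stopped changing" test would declare convergence here, which is exactly the risk with small-step termination.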

Terminating a nonlinear solver on small residual norms is better. The quality of the residual norm as a surrogate for error depends on the conditioning of the Jacobian.

Thank you.

Do you mean something like this?

```julia
convergence_criteria(res) = res.residual_norm <= 1.0e-10
x_res = convergence_criteria(res) ? res.zero[1] : NaN  # NaN instead of a String keeps the type stable
```


I’m not clear on what your code is doing. In any case, if you are solving a system of nonlinear equations

F(x) = 0

Then you’d terminate the iteration when \| F(x_n) \| < \tau_r \| F(x_0) \| + \tau_a. Here \tau_r and \tau_a are relative and absolute error tolerances. This is documented in

C. T. Kelley, *Iterative Methods for Linear and Nonlinear Equations*, Frontiers in Applied Mathematics 16, SIAM, 1995.
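As a sketch, that test might look like this (the names `residual_converged`, `tau_r`, `tau_a`, and `F0norm` are illustrative, not part of NLsolve's API):

```julia
# Combined relative/absolute residual test: stop when
# ‖F(x_n)‖ < tau_r * ‖F(x_0)‖ + tau_a.
residual_converged(Fnorm, F0norm; tau_r = 1e-8, tau_a = 1e-10) =
    Fnorm < tau_r * F0norm + tau_a
```

The relative term scales the tolerance by the initial residual, so a badly scaled F does not force needless iterations; the absolute term guards against the case where F(x_0) itself is already tiny.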


Yes, it is solving for F(x) = 0

The code is in the OP, with the last two lines replaced by the code below. This applies the convergence test based on the norm of the residuals:

```julia
res = nlsolve(f!, j!, [0.01]; method = :anderson, show_trace = false, beta = 1.0, ftol = 1e-10, xtol = 1e-12)
convergence_criteria(res) = res.residual_norm <= 1.0e-10
x_res = convergence_criteria(res) ? res.zero[1] : NaN  # NaN instead of a String keeps the type stable
```
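Putting the pieces of the thread together, here is a hedged sketch of a wrapper that never throws and returns NaN on failure, as asked in the OP (`safe_solve` and its defaults are illustrative names and choices; `tol` defaults to 1e-8 to match NLsolve's default `ftol`):

```julia
using NLsolve

# Run the solver, swallow DomainError, and accept the zero only when the
# residual norm is small. Returns NaN instead of throwing.
function safe_solve(f!, x0; tol = 1e-8, kwargs...)
    res = try
        nlsolve(f!, x0; kwargs...)
    catch err
        err isa DomainError && return NaN
        rethrow()
    end
    return res.residual_norm <= tol ? res.zero[1] : NaN
end
```

For example, `safe_solve(f!, [0.01]; method = :newton)` returns the root as a plain `Float64`, or `NaN` when the run fails or does not meet the residual tolerance.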