Find zero of a nonlinear equation using Julia

After a process using SymPy in Julia, I generated a system of nonlinear equations. For the sake of simplicity, I am going to use an approximation here with just one nonlinear equation. What I get is something like this:

R = (p) -> -5.0488*p + p^2.81 - 3.38/( p^(-1.0) )^2.0

I can plot the R function

using Plots
plot(R, 0,8)

We can see that the R function has two zeros: p = 0 and another one with 5.850 < p < 8.75. I would like to find the positive zero. For this, I tried the nlsolve function, but I get an error:

using NLsolve
nlsolve(R , 5.8)

MethodError: no method matching nlsolve(::var"#1337#1338", ::Float64)
Closest candidates are:
nlsolve(::Any, ::Any, !Matched::AbstractArray; inplace, kwargs...)

First, where am I going wrong with the nlsolve function?

If possible, I would appreciate a solution using the SymPy package in Julia.
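For reference, something like this is the SymPy-style workflow I have in mind (an untested sketch; I am assuming the numeric solver is reachable as sympy.nsolve through SymPy.jl):

using SymPy

@syms p                                            # symbolic variable
expr = -5.0488*p + p^2.81 - 3.38/(p^(-1.0))^2.0    # same expression, but symbolic
sol = sympy.nsolve(expr, p, 5.8)                   # numeric root near the guess 5.8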

Hmm, if I remember correctly, nlsolve is for systems of more than one variable, requiring a vector as input. The example in their documentation accepts a function of the form:

function f!(F, x)
    F[1] = (x[1]+3)*(x[2]^3-7)+18
    F[2] = sin(x[2]*exp(x[1])-1)
end

where F is a vector storing the output and x is the input. Your function could be written as:

function R(F, p)
    F[1] = -5.0488*p + p^2.81 - 3.38/( p^(-1.0) )^2.0
end
nlsolve(R, [5.8])

Have you tried Roots.jl? I’ve never used NLsolve, but Roots worked for me for a similar problem.
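Something along these lines should work for your function (a rough sketch; find_zero with a bracketing interval around the positive root):

using Roots

R = p -> -5.0488*p + p^2.81 - 3.38/(p^(-1.0))^2.0
find_zero(R, (5.0, 8.0))   # bracket where R changes sign; returns ≈ 5.934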

This does not work. I think nlsolve works perfectly for one equation only.

Yes, it works well, but I put just one equation for simplicity. In fact, I have N × S equations and N × S unknowns.
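To give an idea of the shape of my real problem, it is closer to the multi-equation example from the NLsolve documentation; here is that example completed with the solver call and a starting point (the equations are the documentation's toy system, not my actual ones):

using NLsolve

# in-place residual: F is filled with the value of each equation at x
function f!(F, x)
    F[1] = (x[1] + 3) * (x[2]^3 - 7) + 18
    F[2] = sin(x[2] * exp(x[1]) - 1)
end

nlsolve(f!, [0.1, 1.2])   # converges to approximately [0.0, 1.0]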

My function was written incorrectly; this runs without problems and finds a zero:

function R(F, p)   # p is a vector too, not a number
    F[1] = -5.0488*p[1] + p[1]^2.81 - 3.38/( p[1]^(-1.0) )^2.0
end
nlsolve(R, [5.8])

The results are:

julia> nlsolve(R , [5.8])
Results of Nonlinear Solver Algorithm
 * Algorithm: Trust-region with dogleg and autoscaling
 * Starting Point: [5.8]
 * Zero: [5.934184191445043]
 * Inf-norm of residuals: 0.000000
 * Iterations: 3
 * Convergence: true
   * |x - x'| < 0.0e+00: false
   * |f(x)| < 1.0e-08: true
 * Function Calls (f): 4
 * Jacobian Calls (df/dx): 4

It really works!


In the root-finding domain, I also want to highlight IntervalRootFinding.jl. It aims to find all the roots in the given interval (0..2 is the interval from 0 to 2), with a guarantee:

julia> using IntervalRootFinding, IntervalArithmetic

julia> R = (p) -> -5.0488*p + p^2.81 - 3.38/( p^(-1.0) )^2.0
#1 (generic function with 1 method)

julia> roots(R, -1e100..1e100)
2-element Vector{Root{Interval{Float64}}}:
 Root([5.93418, 5.93419], :unique)
 Root([-4.44703e-08, 2.6707e-08], :unknown)

#only positive roots

julia> roots(R, 0..1e100)
2-element Vector{Root{Interval{Float64}}}:
 Root([0, 6.89354e-08], :unknown)
 Root([5.93418, 5.93419], :unique)

The :unknown status of a root means that there may be 0, 1, or multiple roots in that interval. Manual checking can be done afterwards to investigate the root:

julia> found_roots = roots(R, 0..1e100)
2-element Vector{Root{Interval{Float64}}}:
 Root([0, 6.89354e-08], :unknown)
 Root([5.93418, 5.93419], :unique)

julia> fieldnames(Root)
(:interval, :status)

# finding the output interval for a given input interval:
julia> found_roots[1].interval |> R
[-3.48041e-07, 7.51661e-21]

julia> R(0)
0.0

# see the value at the smallest positive increment:
julia> R(eps())
-1.1210588013454983e-15

# see if the sign is the same when moving a little further from 0:
julia> R(eps()*10)
-1.1210588013454998e-14

# see the value for slightly negative inputs:
julia> R(-eps())
ERROR: DomainError with -2.220446049250313e-16:
Exponentiation yielding a complex result requires a complex argument.
Replace x^y with (x+0im)^y, Complex(x)^y, or similar.

But of course, we found that R(0) is a solution. What comes after is more for fun, and to show how one might conduct the investigation.


Definitely!
Interval methods produce rigorous bounds that are robust to round-off errors. In particular, the IntervalRootFinding.jl package is able to separate and bound all the zeros of a function on a given interval. The proof of existence and uniqueness of a single root within a sub-interval relies on the interval Newton method.
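To illustrate, here is a rough sketch of a single interval Newton step on the function from this thread (my own illustration, not a fully rigorous computation: a rigorous version would also evaluate f at the midpoint in interval arithmetic; the derivative is written out by hand and the interval syntax follows the older IntervalArithmetic API used above):

using IntervalArithmetic

f(p)  = -5.0488*p + p^2.81 - 3.38/(p^(-1.0))^2.0
df(p) = -5.0488 + 2.81*p^1.81 - 6.76*p   # derivative of f, computed by hand

X = 5.5..6.5          # candidate interval around the positive root
m = mid(X)            # midpoint of X (a Float64)
N = m - f(m)/df(X)    # one interval Newton step: N(X) = m - f(m)/f'(X)
N ⊆ X                 # true here, so X contains exactly one root of f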


For polynomials, maybe PolynomialRoots.jl is the most relevant.

PolynomialRoots.jl works well because it returns complex roots, but it requires the polynomial to be entered as a list of coefficients. It would be much better if it allowed the polynomial to be entered as a function.
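For example (a small illustration of the coefficient-list input it expects, using a made-up quadratic; coefficients go in ascending order of degree):

using PolynomialRoots

# coefficients in ascending order of degree: c0 + c1*x + c2*x^2 + ...
roots([6.0, -5.0, 1.0])   # x^2 - 5x + 6  ->  roots ≈ 2.0 + 0.0im and 3.0 + 0.0im
roots([1.0, 0.0, 1.0])    # x^2 + 1       ->  roots ≈ ±1.0im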

The list representation is way better for solving. If you use a function, you can’t recover the coefficients.

I’m not sure about that. It seems like an extra step.

Consider a quadratic. From the coefficients you can just apply the quadratic formula, but with the function form, you won't know that you have a quadratic and will have to use a generic nonlinear solver.
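A quick sketch of that difference (using a toy quadratic of my own, and Roots.jl as the generic solver for the function form):

# the quadratic x^2 - 5x + 6, which has roots 2 and 3
a, b, c = 1.0, -5.0, 6.0

# coefficient form: both roots fall out of the quadratic formula directly
x1 = (-b - sqrt(b^2 - 4a*c)) / (2a)   # 2.0
x2 = (-b + sqrt(b^2 - 4a*c)) / (2a)   # 3.0

# function form: a generic solver returns one root per call, driven by the guess
using Roots
f = x -> a*x^2 + b*x + c
find_zero(f, 2.1)   # converges to one of the two roots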

OK, that makes sense. Is there a nonlinear solver that can find complex roots?