UndefRefError: access to undefined reference, when I use NLsolve.jl

I am new to Julia and the Julia community. I'd appreciate it if someone could help me with this error. I have also opened an issue in the NLsolve.jl package repository.

I tried to use variables and parameters of type Float64 with automatic differentiation (autodiff = :forward). Because the package ForwardDiff.jl requires variables to be of type Real (see here), I changed the initial guess to be of type Real, while the parameters remain of type Float64. In this case the function f! can be evaluated (as in the MWE), but nlsolve doesn't work.

Here is the error information.

ERROR: UndefRefError: access to undefined reference
Stacktrace:
 [1] getindex at ./array.jl:788 [inlined]
 [2] seed! at /Users/pearson/.julia/packages/ForwardDiff/cXTw0/src/apiutils.jl:59 [inlined]
 [3] seed! at /Users/pearson/.julia/packages/ForwardDiff/cXTw0/src/apiutils.jl:58 [inlined]
 [4] vector_mode_dual_eval(::var"#1071#1072", ::Array{Real,1}, ::Array{Float64,1}, ::ForwardDiff.JacobianConfig{ForwardDiff.Tag{var"#1071#1072",Real},Real,1,Tuple{Array{ForwardDiff.Dual{ForwardDiff.Tag{var"#1071#1072",Real},Real,1},1},Array{ForwardDiff.Dual{ForwardDiff.Tag{var"#1071#1072",Real},Real,1},1}}}) at /Users/pearson/.julia/packages/ForwardDiff/cXTw0/src/apiutils.jl:43
 [5] vector_mode_jacobian!(::DiffResults.MutableDiffResult{1,Array{Real,1},Tuple{Array{Float64,2}}}, ::var"#1071#1072", ::Array{Real,1}, ::Array{Float64,1}, ::ForwardDiff.JacobianConfig{ForwardDiff.Tag{var"#1071#1072",Real},Real,1,Tuple{Array{ForwardDiff.Dual{ForwardDiff.Tag{var"#1071#1072",Real},Real,1},1},Array{ForwardDiff.Dual{ForwardDiff.Tag{var"#1071#1072",Real},Real,1},1}}}) at /Users/pearson/.julia/packages/ForwardDiff/cXTw0/src/jacobian.jl:164
 [6] jacobian! at /Users/pearson/.julia/packages/ForwardDiff/cXTw0/src/jacobian.jl:74 [inlined]
 [7] (::NLSolversBase.var"#fj_forwarddiff!#24"{var"#1071#1072",ForwardDiff.JacobianConfig{ForwardDiff.Tag{var"#1071#1072",Real},Real,1,Tuple{Array{ForwardDiff.Dual{ForwardDiff.Tag{var"#1071#1072",Real},Real,1},1},Array{ForwardDiff.Dual{ForwardDiff.Tag{var"#1071#1072",Real},Real,1},1}}},Array{Real,1}})(::Array{Real,1}, ::Array{Float64,2}, ::Array{Float64,1}) at /Users/pearson/.julia/packages/NLSolversBase/mGaJg/src/objective_types/oncedifferentiable.jl:160
 [8] value_jacobian!!(::OnceDifferentiable{Array{Real,1},Array{Float64,2},Array{Float64,1}}, ::Array{Real,1}, ::Array{Float64,2}, ::Array{Float64,1}) at /Users/pearson/.julia/packages/NLSolversBase/mGaJg/src/interface.jl:124
 [9] value_jacobian!! at /Users/pearson/.julia/packages/NLSolversBase/mGaJg/src/interface.jl:122 [inlined]
 [10] newton_(::OnceDifferentiable{Array{Real,1},Array{Float64,2},Array{Float64,1}}, ::Array{Real,1}, ::Int64, ::Float64, ::Int64, ::Bool, ::Bool, ::Bool, ::Static, ::NLsolve.var"#27#29", ::NLsolve.NewtonCache{Array{Float64,1}}) at /Users/pearson/.julia/packages/NLsolve/ZBTu4/src/solvers/newton.jl:48
 [11] newton(::OnceDifferentiable{Array{Real,1},Array{Float64,2},Array{Float64,1}}, ::Array{Real,1}, ::Int64, ::Float64, ::Int64, ::Bool, ::Bool, ::Bool, ::Static, ::NLsolve.NewtonCache{Array{Float64,1}}; linsolve::Function) at /Users/pearson/.julia/packages/NLsolve/ZBTu4/src/solvers/newton.jl:134
 [12] nlsolve(::OnceDifferentiable{Array{Real,1},Array{Float64,2},Array{Float64,1}}, ::Array{Real,1}; method::Symbol, xtol::Int64, ftol::Float64, iterations::Int64, store_trace::Bool, show_trace::Bool, extended_trace::Bool, linesearch::Static, linsolve::NLsolve.var"#27#29", factor::Int64, autoscale::Bool, m::Int64, beta::Int64, aa_start::Int64, droptol::Float64) at /Users/pearson/.julia/packages/NLsolve/ZBTu4/src/nlsolve/nlsolve.jl:23
 [13] nlsolve(::Function, ::Array{Real,1}; method::Symbol, autodiff::Symbol, inplace::Bool, kwargs::Base.Iterators.Pairs{Symbol,Real,Tuple{Symbol,Symbol},NamedTuple{(:ftol, :show_trace),Tuple{Float64,Bool}}}) at /Users/pearson/.julia/packages/NLsolve/ZBTu4/src/nlsolve/nlsolve.jl:52
 [14] top-level scope at none:0

Here is the MWE.

using NLsolve

function f!(res::AbstractArray, guess::AbstractArray, param::Float64)
    x = similar(guess)
    x[1] = guess[1]
    res[1] = x[1]^param - param*x[1] + 1
    return res
end

# Initial guess
guess = Array{Real,1}(undef, 1)
guess[1] = 5.0
# Parameter
param = 2.0
# First evaluate (this works)
res = similar(guess)
init_f = f!(res, guess, param)
# Solve the problem (this throws the error)
result = nlsolve((res, guess) -> f!(res, guess, param),
    guess,
    ftol = 1e-6,
    method = :newton,
    autodiff = :forward,
    show_trace = true,
    )

Thank you in advance.

I think the error comes from line 49 of nlsolve.jl:

df = OnceDifferentiable(f, initial_x, similar(initial_x); autodiff=autodiff, inplace=inplace)

If initial_x is of type Array{Real}, then similar(initial_x) returns an array of undefined references.
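For illustration, here is a minimal sketch of that behavior outside NLsolve: similar keeps the abstract element type but does not copy values, so every entry of the new array is an undefined reference, which is exactly the UndefRefError in the stacktrace.

initial_x = Array{Real,1}(undef, 1)
initial_x[1] = 5.0            # the original element is assigned

F = similar(initial_x)        # Array{Real,1} whose single entry is #undef
isassigned(F, 1)              # false
# F[1]                        # throws UndefRefError: access to undefined reference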

The mistake is on your side: you put type constraints where they should not be. First, you can remove the AbstractArray annotations in the definition of f!; they do nothing (very little, actually), so why bother?

More importantly, you should change guess = Array{Real,1}(undef,1) to guess = Array{Float64,1}(undef,1), or better, write guess = [0.5].

This is a classic mistake: when you define guess, you don't give it a concrete element type, and you even force it to be very loosely typed over the reals.
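Concretely, a minimal sketch of the corrected MWE with those two changes applied (no type constraints in f!, and a concrete Float64 guess):

using NLsolve

function f!(res, guess, param)
    res[1] = guess[1]^param - param*guess[1] + 1
    return res
end

param = 2.0
guess = [0.5]    # concrete Array{Float64,1}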

Then you get

julia> # Solve the problem(this goes with the error)
       result = nlsolve((res,guess) -> f!(res,guess,param),
           guess,
           ftol = 1e-9,
           method = :newton,
           autodiff = :forward,
           show_trace = true,
           )
Iter     f(x) inf-norm    Step 2-norm 
------   --------------   --------------
     0     2.500000e-01              NaN
     1     6.250000e-02     6.250000e-02
     2     1.562500e-02     1.562500e-02
     3     3.906250e-03     3.906250e-03
     4     9.765625e-04     9.765625e-04
     5     2.441406e-04     2.441406e-04
     6     6.103516e-05     6.103516e-05
     7     1.525879e-05     1.525879e-05
     8     3.814697e-06     3.814697e-06
     9     9.536743e-07     9.536743e-07
    10     2.384186e-07     2.384186e-07
    11     5.960464e-08     5.960464e-08
    12     1.490116e-08     1.490116e-08
    13     3.725290e-09     3.725290e-09
    14     9.313226e-10     9.313226e-10
Results of Nonlinear Solver Algorithm
 * Algorithm: Newton with line-search
 * Starting Point: [0.5]
 * Zero: [0.999969482421875]
 * Inf-norm of residuals: 0.000000
 * Iterations: 14
 * Convergence: true
   * |x - x'| < 0.0e+00: false
   * |f(x)| < 1.0e-09: true
 * Function Calls (f): 15
 * Jacobian Calls (df/dx): 14

However, I am now quite surprised by the bad convergence of this Newton run; it does not seem very quadratic to me…
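One possible explanation, for the record: with param = 2 the residual is x^2 - 2x + 1 = (x - 1)^2, so x = 1 is a double root, and Newton's method is only linearly convergent at a double root. The error roughly halves at each step, so the residual drops by about a factor of four per iteration, which matches the trace above. A quick check:

f(x)  = x^2 - 2x + 1      # == (x - 1)^2, double root at x = 1
df(x) = 2x - 2

let x = 0.5
    for k in 1:5
        x -= f(x) / df(x)          # Newton step
        println(k, "  ", f(x))     # residual shrinks by roughly 4x each iteration
    end
end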


Thank you for your suggestion.

Actually, I am handling a large-scale nonlinear system with 6300 equations. Should I remove the AbstractArray constraints there as well?

By the way, I know that making the guess a Float64 array directly works in this case, but it does not work in my whole problem. As I mentioned above, I ran into a problem with ForwardDiff.jl, so I tried to use an initial guess of type Real. Or do I misunderstand tip 3 here? If I force the guess to be of type Real, then the error is thrown.

I guess you misunderstand. It is fine to put type constraints in function definitions, but they should be loose enough for AD to work. In your case I would remove all type constraints in f!. This is what your link is referring to. If you force f! to accept only Float64 inputs, for example, ForwardDiff would error, because it wants to call the function with Dual numbers.
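For example (a small sketch with hypothetical function names): constraining the argument to Float64 breaks ForwardDiff, while a loose or absent constraint works, because ForwardDiff calls the function with Dual numbers.

using ForwardDiff

g_loose(x) = [x[1]^2 - 2x[1] + 1]                    # no constraint: accepts Dual numbers
g_strict(x::Vector{Float64}) = [x[1]^2 - 2x[1] + 1]  # only accepts Float64 vectors

ForwardDiff.jacobian(g_loose, [0.5])        # works
# ForwardDiff.jacobian(g_strict, [0.5])     # MethodError: no method matching
                                            # g_strict(::Vector{ForwardDiff.Dual{...}})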

The second mistake is that x::Array{Real,1} is "harmful" because it prevents the compiler from using a concrete type (like Array{Float64,1}); as the docs say, Real is an abstract supertype. Abstract types are mostly used for dispatch, by putting type constraints on the arguments of functions.


I am sorry about the imprecision before.

Previously, I simply fed in a guess of type Float64. These were my type constraints in the function definition, and they do not seem to impose any real restrictions. I have also tried removing all constraints, but the error is the same.

function dynamic_problem!(
    res_dynamic::AbstractArray, # residual 
    guess_dynamic::AbstractArray, # guess
    init_dynamic::NamedTuple, # initial conditions
    exos_dynamic::NamedTuple, # exogenous variables
    params_dynamic::NamedTuple, # parameters
    )

The error is:

ERROR: MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{var"#32#33",Float64},Float64,12})
Closest candidates are:
  Float64(::Real, ::RoundingMode) where T<:AbstractFloat at rounding.jl:200
  Float64(::T) where T<:Number at boot.jl:715
  Float64(::Int8) at float.jl:60
  ...
Stacktrace:
 [1] convert(::Type{Float64}, ::ForwardDiff.Dual{ForwardDiff.Tag{var"#32#33",Float64},Float64,12}) at ./number.jl:7
 [2] setindex! at ./array.jl:828 [inlined]
 [3] macro expansion at ./multidimensional.jl:786 [inlined]
 [4] macro expansion at ./cartesian.jl:64 [inlined]
 [5] macro expansion at ./multidimensional.jl:781 [inlined]
 [6] _unsafe_setindex!(::IndexLinear, ::Array{Float64,3}, ::Array{ForwardDiff.Dual{ForwardDiff.Tag{var"#32#33",Float64},Float64,12},2}, ::UnitRange{Int64}, ::UnitRange{Int64}, ::Int64) at ./multidimensional.jl:774
 [7] _setindex! at ./multidimensional.jl:769 [inlined]
 [8] setindex! at ./abstractarray.jl:1073 [inlined]
 [9] dynamic_problem!(::Array{ForwardDiff.Dual{ForwardDiff.Tag{var"#32#33",Float64},Float64,12},3}, ::Array{ForwardDiff.Dual{ForwardDiff.Tag{var"#32#33",Float64},Float64,12},3}, ::NamedTuple{(:π₁, :Y₁, :Xᶠ₁, :wL₁, :rK₁),Tuple{Array{Float64,3},Array{Float64,2},Array{Float64,2},Array{Float64,1},Array{Float64,2}}}, ::NamedTuple{(:Dᴿ, :L̂, :d̂, :T̂),Tuple{Array{Float64,2},Array{Float64,2},Array{Float64,4},Array{Float64,3}}}, ::NamedTuple{(:T, :NC, :NS, :NK, :β̃ᴸ, :β̃ᴷ, :ψ, :θ, :β̃ᴹ, :ρ, :δ, :α),Tuple{Int64,Int64,Int64,Int64,Array{Float64,2},Array{Float64,3},Array{Float64,2},Int64,Array{Float64,3},Float64,Array{Float64,2},Array{Float64,2}}}) at /Users/pearson/research/gitrepo/replication_exercise_EKNR2016AER/Full_Model/baseline/src/Full_Baseline_Functions.jl:244
 [10] (::var"#32#33")(::Array{ForwardDiff.Dual{ForwardDiff.Tag{var"#32#33",Float64},Float64,12},3}, ::Array{ForwardDiff.Dual{ForwardDiff.Tag{var"#32#33",Float64},Float64,12},3}) at ./none:2
 [11] chunk_mode_jacobian!(::DiffResults.MutableDiffResult{1,Array{Float64,3},Tuple{Array{Float64,2}}}, ::var"#32#33", ::Array{Float64,3}, ::Array{Float64,3}, ::ForwardDiff.JacobianConfig{ForwardDiff.Tag{var"#32#33",Float64},Float64,12,Tuple{Array{ForwardDiff.Dual{ForwardDiff.Tag{var"#32#33",Float64},Float64,12},3},Array{ForwardDiff.Dual{ForwardDiff.Tag{var"#32#33",Float64},Float64,12},3}}}) at /Users/pearson/.julia/packages/ForwardDiff/cXTw0/src/jacobian.jl:213
 [12] jacobian! at /Users/pearson/.julia/packages/ForwardDiff/cXTw0/src/jacobian.jl:76 [inlined]
 [13] (::NLSolversBase.var"#fj_forwarddiff!#24"{var"#32#33",ForwardDiff.JacobianConfig{ForwardDiff.Tag{var"#32#33",Float64},Float64,12,Tuple{Array{ForwardDiff.Dual{ForwardDiff.Tag{var"#32#33",Float64},Float64,12},3},Array{ForwardDiff.Dual{ForwardDiff.Tag{var"#32#33",Float64},Float64,12},3}}},Array{Float64,3}})(::Array{Float64,3}, ::Array{Float64,2}, ::Array{Float64,3}) at /Users/pearson/.julia/packages/NLSolversBase/mGaJg/src/objective_types/oncedifferentiable.jl:160
 [14] value_jacobian!!(::OnceDifferentiable{Array{Float64,3},Array{Float64,2},Array{Float64,3}}, ::Array{Float64,3}, ::Array{Float64,2}, ::Array{Float64,3}) at /Users/pearson/.julia/packages/NLSolversBase/mGaJg/src/interface.jl:124
 [15] value_jacobian!! at /Users/pearson/.julia/packages/NLSolversBase/mGaJg/src/interface.jl:122 [inlined]
 [16] newton_(::OnceDifferentiable{Array{Float64,3},Array{Float64,2},Array{Float64,3}}, ::Array{Float64,3}, ::Float64, ::Float64, ::Int64, ::Bool, ::Bool, ::Bool, ::Static, ::NLsolve.var"#27#29", ::NLsolve.NewtonCache{Array{Float64,3}}) at /Users/pearson/.julia/packages/NLsolve/ZBTu4/src/solvers/newton.jl:48
 [17] #newton#7 at /Users/pearson/.julia/packages/NLsolve/ZBTu4/src/solvers/newton.jl:134 [inlined]
 [18] nlsolve(::OnceDifferentiable{Array{Float64,3},Array{Float64,2},Array{Float64,3}}, ::Array{Float64,3}; method::Symbol, xtol::Float64, ftol::Float64, iterations::Int64, store_trace::Bool, show_trace::Bool, extended_trace::Bool, linesearch::Static, linsolve::NLsolve.var"#27#29", factor::Float64, autoscale::Bool, m::Int64, beta::Int64, aa_start::Int64, droptol::Float64) at /Users/pearson/.julia/packages/NLsolve/ZBTu4/src/nlsolve/nlsolve.jl:23
 [19] nlsolve(::Function, ::Array{Float64,3}; method::Symbol, autodiff::Symbol, inplace::Bool, kwargs::Base.Iterators.Pairs{Symbol,Real,Tuple{Symbol,Symbol},NamedTuple{(:ftol, :show_trace),Tuple{Float64,Bool}}}) at /Users/pearson/.julia/packages/NLsolve/ZBTu4/src/nlsolve/nlsolve.jl:52
 [20] top-level scope at none:0

Because of this MethodError, I switched to initializing the guess with element type Real, and then the UndefRefError arises.

Can you try with function f!(res, guess, param)?

This works fine with a Float64 guess, but not with a Real guess (the UndefRefError arises). However, I still cannot tell why the MethodError appears in my whole problem with a Float64 guess.

That’s a pretty good question… It should actually converge in one iteration here…

I see…

Unlike in the MWE, in the whole problem I didn't preallocate the arrays derived from the guess with the proper element type by using eltype or similar. If I preallocate them using eltype, the MethodError disappears.
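For anyone who finds this later, here is a hypothetical sketch of the pattern (the real dynamic_problem! is much larger): buffers hard-coded to Float64 cannot store the Dual numbers that ForwardDiff passes in, whereas buffers allocated with eltype (or similar) can.

function sketch_problem!(res, guess, param)
    # Bad: a hard-coded Float64 buffer cannot hold ForwardDiff.Dual values,
    # which is what triggers the MethodError above:
    # tmp = zeros(Float64, size(guess))

    # Good: let the buffer follow the element type of the input, so it holds
    # Float64 in a plain evaluation and Dual during autodiff:
    tmp = zeros(eltype(guess), size(guess))   # or simply tmp = similar(guess)
    @. tmp = guess^param - param*guess + 1    # placeholder computation
    res .= tmp
    return res
end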

Also, thank you for your tips on the type constraints!

nice!