UndefRefError: access to undefined reference

I would appreciate it if someone could comment on the following behavior. Please consider the code below for an estimator type:

type SimpleKriging{T<:Real} <: AbstractEstimator
  # input fields
  X::AbstractMatrix{T}
  z::AbstractVector{T}
  cov::CovarianceModel
  μ::T

  # state fields
  LLᵀ::Cholesky

  function SimpleKriging(X, z, cov, μ)
    @assert size(X, 2) == length(z) "incorrect data configuration"
    SK = new(X, z, cov, μ)
    fit!(SK, X)
    SK
  end
end
SimpleKriging(X, z, cov, μ) = SimpleKriging{eltype(z)}(X, z, cov, μ)

function fit!{T<:Real}(estimator::SimpleKriging{T}, X::AbstractMatrix{T})
  C = pairwise(estimator.cov, X)
  estimator.LLᵀ = cholfact(C)
end

function estimate{T<:Real}(estimator::SimpleKriging{T}, xₒ::AbstractVector{T})
  X = estimator.X; z = estimator.z
  cov = estimator.cov; μ = estimator.μ
  LLᵀ = estimator.LLᵀ
  nobs = length(z)

  # evaluate covariance at location
  c = Float64[cov(norm(X[:,j]-xₒ)) for j=1:nobs]

  # solve linear system
  y = z - μ
  λ = LLᵀ \ c

  # return estimate and variance
  μ + y⋅λ, cov(0) - c⋅λ
end

The code works with Float64, but when I try lower precision, as in the script below, I get UndefRefError: access to undefined reference, with a line number pointing at the field LLᵀ::Cholesky. The exact line that triggers the error is μ2, σ² = estimate(simkrig2, xₒ):

# create some data
dim, nobs = 3, 100
X1 = rand(dim, nobs); z = rand(nobs)
X2 = convert(Matrix{Float32},X1)

# define a covariance type...
cov = some_function_of_x_and_y

# construct estimators
simkrig1 = SimpleKriging(X1, z, cov, mean(z))
simkrig2 = SimpleKriging(X2, z, cov, mean(z))

# estimate
μ1, σ² = estimate(simkrig1, xₒ)
μ2, σ² = estimate(simkrig2, xₒ)

I wonder if this is the correct idiom in Julia for initializing state fields in a type. These are fields that are expensive to compute, and I would like to have them computed once upon instantiation, or recomputed later if the user needs to retrain with the fit! method, pretty much like Python's sklearn API.
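Concretely, the workflow I am after looks like this (just to illustrate the calls above; Xnew is a placeholder for new data):

sk = SimpleKriging(X, z, cov, mean(z))  # constructor calls fit! once, caching LLᵀ
μ, σ² = estimate(sk, xₒ)                # every estimate reuses the cached factorization
fit!(sk, Xnew)                          # recompute only when retraining on new data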

Could you post a runnable version? It would be easier to help if I could run it. But if I had to guess, I would suspect that estimator.LLᵀ = cholfact(C) might be the culprit. I would add a println/show above and below it and make sure that it is actually assigning a value to the estimator.
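For example, something along these lines inside fit! (just a sketch) would tell you whether the assignment actually happens:

println("LLᵀ defined before? ", isdefined(estimator, :LLᵀ))
estimator.LLᵀ = cholfact(C)
println("LLᵀ defined after?  ", isdefined(estimator, :LLᵀ))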

By the way, your type is not strictly typed since you’re using abstract types. This could lead to type-instabilities. You may want to consider making the types more concrete / parametric.

Hi @ChrisRackauckas, sure. I pushed the changes to GitHub, even though they are not working for lower precision yet. Below is a minimal working example that reproduces the error:

Pkg.add("GeoStats")
Pkg.checkout("GeoStats")
using GeoStats

srand(2017)

# create some data
dim, nobs = 3, 100
X1 = rand(dim, nobs); z = rand(nobs)
X2 = convert(Matrix{Float32},X1)

# target location
xₒ = rand(dim)

# define a covariance model
cov = GaussianCovariance(0.,1.,1.)

simkrig1 = SimpleKriging(X1, z, cov, mean(z))
simkrig2 = SimpleKriging(X2, z, cov, mean(z))

# estimation
μ1, σ² = estimate(simkrig1, xₒ)
μ2, σ² = estimate(simkrig2, xₒ)

println(μ1, " ", μ2)

Cool. Your issue is further back.

simkrig2 = SimpleKriging(X2, z, cov, mean(z))

that still has the factorization field LLᵀ undefined. The reason is here:

function fit!{T<:Real}(estimator::SimpleKriging{T}, X::AbstractMatrix{T})
  C = pairwise(estimator.cov, X)
  estimator.LLᵀ = cholfact(C)
end

This requires that the element type of the estimator and of X be the same. However, they are not: one is Float64 and the other is Float32, so the call falls through to your fallback method, which does nothing.
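Here is a minimal illustration of that dispatch behavior, with made-up methods rather than the GeoStats ones:

# a single parameter T has to match both arguments at once
g{T<:Real}(x::Vector{T}, y::Vector{T}) = "matched"
g(x, y) = "fallback"

g([1.0], [2.0])          # "matched": both arguments are Vector{Float64}
g([1.0], Float32[2.0])   # "fallback": Float64 vs Float32, no single T fits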

The fix is to split the types:

function fit!{T<:Real,T2<:Real}(estimator::SimpleKriging{T}, X::AbstractMatrix{T2})
  C = pairwise(estimator.cov, X)
  estimator.LLᵀ = cholfact(C)
end

or I would just duck type it:

function fit!(estimator, X)
  C = pairwise(estimator.cov, X)
  estimator.LLᵀ = cholfact(C)
end

You’ll still have an error because cholfact refuses to factorize the matrix, saying it’s not positive definite, but hey, now it’s calling the right method.
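Not something from this thread, but a common workaround for that follow-up error is to promote the covariance matrix and/or add a tiny jitter to its diagonal before factorizing, roughly:

function fit!(estimator, X)
  C = pairwise(estimator.cov, X)
  # promote to Float64 and nudge the diagonal so the matrix is
  # numerically positive definite (the jitter size here is ad hoc)
  C64 = convert(Matrix{Float64}, C)
  estimator.LLᵀ = cholfact(C64 + 1e-10*eye(size(C64, 1)))
end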

@ChrisRackauckas could you please elaborate on the abstract type comment you made? Am I using abstract types in an inefficient manner?

You’re using concrete types in an inefficient manner by typing the fields abstractly. Here’s a quick example:

type Foo
  a::Float64
end
type Foo2
  a::Number
end
A = Foo(1.0)
A2 = Foo2(1.0)

function f(A)
  tmp = 0.0
  for i in 1:1000
    tmp += A.a
  end
end

using BenchmarkTools
@benchmark f($A)
@benchmark f($A2)
# A
Trial(2.634 ns)
# A2
Trial(14.344 μs)

That’s a 5446x slowdown just from using an abstract field in the type. The reason is that Julia cannot know what kind of number is in A2. Thus it cannot compile f in a way that uses information about A2.a, and has to compile it generically. By “compile it generically” I mean the types don’t infer.
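The listing below is the kind of report you get from @code_warntype; it should be reproducible in the same session with:

@code_warntype f(A2)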

Variables:
  #self#::#f
  A::Foo2
  tmp::ANY
  #temp#::Int64
  i::Int64

Body:
  begin 
      tmp::ANY = 0.0 # line 12:
      SSAValue(2) = (Base.select_value)((Base.sle_int)(1,1000)::Bool,1000,(Base.box)(Int64,(Base.sub_int)(1,1)))::Int64
      #temp#::Int64 = 1
      5: 
      unless (Base.box)(Base.Bool,(Base.not_int)((#temp#::Int64 === (Base.box)(Int64,(Base.add_int)(SSAValue(2),1)))::Bool)) goto 15
      SSAValue(3) = #temp#::Int64
      SSAValue(4) = (Base.box)(Int64,(Base.add_int)(#temp#::Int64,1))
      i::Int64 = SSAValue(3)
      #temp#::Int64 = SSAValue(4) # line 13:
      tmp::ANY = (tmp::ANY + (Core.getfield)(A::Foo2,:a)::NUMBER)::ANY
      13: 
      goto 5
      15: 
      return
  end::Void

Notice the ANY in there: that’s the lack of information. However, the same function on A compiles to efficient code:
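As before, the listing is what @code_warntype reports:

@code_warntype f(A)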

Variables:
  #self#::#f
  A::Foo
  tmp::Float64
  #temp#::Int64
  i::Int64

Body:
  begin 
      tmp::Float64 = 0.0 # line 12:
      SSAValue(2) = (Base.select_value)((Base.sle_int)(1,1000)::Bool,1000,(Base.box)(Int64,(Base.sub_int)(1,1)))::Int64
      #temp#::Int64 = 1
      5: 
      unless (Base.box)(Base.Bool,(Base.not_int)((#temp#::Int64 === (Base.box)(Int64,(Base.add_int)(SSAValue(2),1)))::Bool)) goto 15
      SSAValue(3) = #temp#::Int64
      SSAValue(4) = (Base.box)(Int64,(Base.add_int)(#temp#::Int64,1))
      i::Int64 = SSAValue(3)
      #temp#::Int64 = SSAValue(4) # line 13:
      tmp::Float64 = (Base.box)(Base.Float64,(Base.add_float)(tmp::Float64,(Core.getfield)(A::Foo,:a)::Float64))
      13: 
      goto 5
      15: 
      return
  end::Void

This means a few things. First of all, notice that the function, not being typed at all, can produce efficient code. This is because functions auto-specialize on types, meaning that type annotations in function signatures are only for dispatch. I.e. changing

function f(x)
...
end

to

function f(x::Float64)
...
end

is not different in the way it performs. The reason is that the compiler can use the type information at compile time (which is when internal dispatches occur) to specialize this function and optimize it. This is why using abstract types in function signatures like this is a good idea: it doesn’t impact performance, and it lets functions be very general in use.
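A quick way to convince yourself of this, with throwaway functions (not from the thread):

h(x) = x + x                  # untyped signature
h_typed(x::Float64) = x + x   # typed signature; only restricts dispatch

# both should emit essentially the same optimized code for a Float64 argument
@code_llvm h(1.0)
@code_llvm h_typed(1.0)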

However, that is not the case for types. For types, the fields need to be concretely typed in order for the compiler to know what types are in there. If it does not know what types are in there, it cannot know what the type will be when you access the field, and thus everything loses its inferability. I.e. type fields do not auto-specialize, and so they should be strictly typed.

If you need a type to work over some abstract type, you can use a type parameter. For example:

type Foo3{T<:Number}
  a::T
end

will be efficient for any numbers, since this type keeps the information about what type is in there so that way the compiler can do the previously mentioned optimizations.
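For instance, one would expect the parametric version to benchmark like the concrete one (a sketch, not timings I actually ran):

A3 = Foo3(1.0)      # Foo3{Float64}: the field type is fully known to the compiler
@benchmark f($A3)   # should be in the same ballpark as f($A), not f($A2)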

Moral of the story, strictly type your types, but loosely type your functions.

Awesome answer, I think I am on the same page. In my example specifically, are you suggesting I replace the AbstractMatrix fields with concrete ones? https://github.com/juliohm/GeoStats.jl/blob/master/src/simple_kriging.jl

Yes, or make the whole field a type parameter like

type Foo{T<:AbstractMatrix}
  a::T
end

In v0.5 you cannot then control what the element type is, since that requires triangular dispatch, but in v0.6 you can with the new where syntax.
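For reference, on v0.6 something along these lines should let you constrain the element type as well (untested sketch):

type Foo4{T<:AbstractMatrix{<:Real}}  # bound uses the new v0.6 UnionAll syntax
  a::T
end

Foo4(rand(3, 3))          # ok: Matrix{Float64} <: AbstractMatrix{<:Real}
# Foo4(fill("x", 2, 2))   # MethodError: the element type String is not <: Real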

Great response from @ChrisRackauckas.

For versions before v0.6, though, you can also define an inner constructor for the type to enforce the element type:

using Compat

type Foo{T<:AbstractMatrix}
  a::T
  
  @compat function (::Type{Foo})(A::AbstractMatrix)
    eltype(A) <: Real || error("Foo(A): `A` needs to contain `Real`s")
    new{typeof(A)}(A)
  end
end
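Usage would then look like this (the second call errors by design):

Foo(rand(3, 3))        # works: eltype Float64 <: Real
Foo(fill("a", 2, 2))   # throws "Foo(A): `A` needs to contain `Real`s"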