Why is this code not type stable?

inner(pc) = sum(pc)
outer(pc, σ) = pc * σ

tst = let
    pc = rand(3, 10)
    σ = 2.0
    tst() = outer(inner(pc), σ)
end

What do you want to achieve here?

Isn’t just

tst(pc,σ) = outer(inner(pc),σ)

tst(rand(3,10), 2.0)

what you want there?

I’m just trying to come up with a contrived example that illustrates a problem in much more complicated code. I don’t understand the type instability with closures. I will see if I can construct a closer example.


If the variable that is closed over is a non-constant, untyped global variable, then the closure will be type unstable. For example:

julia> a = 1.0

julia> f(x) = x - a
f (generic function with 1 method)

julia> @code_warntype(f(1.0))
MethodInstance for f(::Float64)
  from f(x) in Main at REPL[5]:1
1 ─ %1 = (x - Main.a)::Any
└──      return %1

That’s type unstable because the type of a cannot be known at compile time. There is also performance of captured variables in closures · Issue #15276 · JuliaLang/julia · GitHub, a long-standing difficulty with using closures in many contexts.
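For contrast, a minimal sketch of the two standard fixes (the names b, c, g, h here are made up for illustration): mark the global const, or, on Julia 1.8+, give it a type annotation:

```julia
# Two ways to make the global's type known to the compiler:
const b = 1.0        # constant global: type (and value) fixed
g(x) = x - b         # now inferred as returning Float64

c::Float64 = 1.0     # typed global (Julia 1.8+): value may change, type may not
h(x) = x - c         # also type stable
```

Running @code_warntype on g(1.0) or h(1.0) should no longer show the ::Any seen above.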

This is type unstable,

    pc = problem.cspond
    σ = 4.0
    calc_loss = let
        losses = Vector{Float64}(undef, length(pc))
        calc_loss(F::FundamentalMatrix)::Vector{Float64} = losses .= cauchy_ρ.(fund_mat_signed_sampson_dist.(pc, Ref(F)), σ)
    end

while this is type stable,

    pc = problem.cspond
    σ = 4.0
    calc_loss = let pc = pc, σ = σ
        losses = Vector{Float64}(undef, length(pc))
        calc_loss(F::FundamentalMatrix)::Vector{Float64} = losses .= cauchy_ρ.(fund_mat_signed_sampson_dist.(pc, Ref(F)), σ)
    end

I guess it’s related to the bug you referenced. I tried to use FastClosures, but I could not understand how to use it in this context.

If you are running that in the global scope, it will be type unstable just because pc and σ are non-constant globals in the first case, while in the second case the let binds them to fresh locals.

Otherwise it is probably related to that bug.


No, pc and σ are declared inside a function. It was a bit hard to construct a minimal working example here.


Then it’s the bug. I think using the assignments in the let is effectively the way to handle that.

I don’t understand why that fixes things. But also, isn’t there the FastClosures package that is supposed to fix this? How do you use the macro?

I think there it would be just:

@closure calc_loss(F::FundamentalMatrix)::Vector{Float64} = losses .= cauchy_ρ.(fund_mat_signed_sampson_dist.(pc, Ref(F)), σ)

(replacing the let block, with losses, pc, and σ defined outside it).

I think I tried that and there was some compile error. OK, I will take a look again. This bug is a big problem because optimization code often needs closures for reasonable syntax.

The problem arises more when modifying the closed-over variables. In your case the losses variable is being closed over and modified. Maybe it is clearer if you define it inside the calc_loss function, or if you pass it as a parameter as well.

(but yes, it is an annoying bug)


If I make losses an lvalue, is it possible to avoid a temporary array being created inside calc_loss? In other words, with

losses .= calc_loss(...)

I get a bit confused about whether the compiler can optimize this.

Only if losses is a small static array; otherwise it can’t.
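To illustrate the point with a toy example (the names calc, calc!, and losses here are hypothetical, not from the thread): broadcasting into losses reuses the buffer for the assignment itself, but any array the callee builds internally is still allocated. Moving the mutation into the callee avoids that:

```julia
calc(x) = 2 .* x                    # builds and returns a new array
losses = zeros(3)
losses .= calc([1.0, 2.0, 3.0])     # `.=` reuses `losses`, but the temporary
                                    # created inside `calc` is still allocated

# In-place variant: pass the buffer in; the broadcast fuses, so nothing
# is allocated per call
calc!(out, x) = out .= 2 .* x
calc!(losses, [4.0, 5.0, 6.0])
```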

The other way (which I like most) is to make calc_losses a callable object that carries the buffer. Something like:

julia> struct CalcLosses
           losses::Vector{Float64}
           a::Float64
       end

julia> (c::CalcLosses)(x::Vector{Float64}) = c.losses .= c.a .* x

julia> calc_losses = CalcLosses(ones(10), 2.0)
CalcLosses([1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0], 2.0)

julia> calc_losses(rand(10))
10-element Vector{Float64}:

I don’t know if every type of optimization interface can handle that, though.

(c::CalcLosses)(x::Vector{Float64}) = c.losses .= c.a .* x

You’ve lost me here :) Can you explain this syntax?

Functions in julia aren’t the only things that can have methods. In fact, functions are just a special case of something called “callable structs”. The idea here, is that you can treat the struct CalcLosses as a function itself, but that function stores data it can reference.

So when they write

(c::CalcLosses)(x::Vector{Float64}) = c.losses .= c.a .* x

it’s saying that for any c whose type is CalcLosses, you can write c(x) where x is of type Vector{Float64} and it will compute c.losses .= c.a .* x.
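Putting the pieces above together into a self-contained, runnable sketch (shrunk to length 3 for brevity):

```julia
# Callable struct: like a closure, but with explicit, concretely typed fields
struct CalcLosses
    losses::Vector{Float64}   # preallocated output buffer
    a::Float64                # captured parameter
end

# Any instance c of CalcLosses can now be called like a function:
(c::CalcLosses)(x::Vector{Float64}) = c.losses .= c.a .* x

calc_losses = CalcLosses(ones(3), 2.0)
calc_losses([1.0, 2.0, 3.0])   # fills calc_losses.losses in place
```

Because the field types are concrete, calls to the object are fully inferable, with no boxing.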


But this functionality seems very similar to what can be achieved with closures. Besides avoiding the compiler bug or limitation, what is the advantage of this construction?


I didn’t say it was different. In fact, this is essentially how closures are implemented. The problem is just that the compiler can’t always prove that the captured variables keep a single type, so you essentially end up with a struct that looks like

struct CalcLosses
    losses::Any
    a::Any
end

or something like that, if it can’t assume that losses and a don’t change type.


Counterintuitively, losses is an rvalue here because this expression lowers to a call to Base.materialize!(losses, ...). You can confirm this like so:

julia> _ = [1,2,3];

julia> _ .= [1,2,3];
ERROR: syntax: all-underscore identifier used as rvalue

It’s similar to how in the expression x[1:2] = 3:4, x is an rvalue because the expression is lowered to setindex!(x, 3:4, 1:2). Not to worry, this is a good thing; because it’s an rvalue, the identifier is not reassigned.

However, from your description above, the actual type instability is likely occurring because pc and/or σ are assigned to more than once in the parent scope, making it so that calc_loss doesn’t know what value to assume when it’s called, so it boxes them. This is why the let block trick works: it captures a snapshot of their values and makes it so the closure can access only the value at that specific point.
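The snapshot behavior can be shown with a minimal sketch (the names boxed and unboxed are made up for illustration): reassignment after the closure is created forces the capture into a Core.Box, while the let snapshot keeps it concrete:

```julia
function boxed()
    s = 0.0
    f() = s        # `s` is reassigned below, so it is captured in a Core.Box
    s = 1.0
    return f
end

function unboxed()
    s = 0.0
    s = 1.0
    g = let s = s  # fresh local `s`, assigned exactly once: captured by value
        () -> s
    end
    return g
end

# fieldtype(typeof(boxed()), :s) is Core.Box; for unboxed() it is Float64
```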
