When and How to use Types

I have a function that depends on two integer parameters. It currently returns a function, but extensions would likely return objects whose dimensions depend on the input parameters. A naive implementation:

using SpecialFunctions

function tps_basis_eta(m::T, d::T) where {T <: Integer}
    @assert 2m>d "Too many dimensions for degrees of smoothness"
    if iseven(d)
        # d÷2 (integer division) keeps factorial's argument an Integer
        r -> (-1)^(m + 1 + d÷2) * r^(2m-d) * log(r) /
            (2^(2m-1) * π^(d/2) * factorial(m-1) * factorial(m - d÷2))
    else
        r -> gamma(d/2 - m) * r^(2m-d)/
            (2^(2m) * π^(d/2) * factorial(m-1))
    end
end

eta1 = tps_basis_eta(2, 1)

Looking at Interpolations.jl I see a different style, in which most of the defining characteristics are expressed as types. Even things that seem as if they might use value types, like the degree of the polynomial, are expressed as named types, e.g., Quartic and Cubic. So I'm wondering whether I should be doing the same.

If I started to return, say, an m x d matrix, then I think type stability might be an issue too, although maybe not if the matrix were always two-dimensional?

Finally, the inner functions have a lot of quantities that only need to be computed once, when they are created. Do I need to pull them out explicitly, or is the compiler smart enough to do that? Or perhaps it would not be, because m and d come from an outer scope and might change?

P.S. I take it Julia does not have postfix operators, so I can't write n! for factorial(n).

Interpolations.jl is written the way it is so that you obtain a specialized type that can take advantage of multiple dispatch in subsequent operations.

In terms of type stability, you want a consistent output type for a given set of input types. Sometimes this means encoding some values as types. For a quick way to do so, see Val.
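For instance, a value can be lifted into the type domain with Val. A minimal sketch (addone is just an illustrative name):

julia> addone(::Val{N}) where {N} = N + 1
addone (generic function with 1 method)

julia> addone(Val(3))   # the 3 is part of the type Val{3}, so the compiler sees it
4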

I suggest that you use tools like @code_warntype, @code_lowered, @code_llvm, and @code_native to explore what the compiler is doing.

Let’s take a simple example:

julia> f(x) = 5 + 5
f (generic function with 1 method)

julia> @code_lowered f(1)
CodeInfo(
1 ─ %1 = 5 + 5
└──      return %1
)

julia> @code_llvm f(1)

;  @ REPL[125]:1 within `f'
; Function Attrs: uwtable
define i64 @julia_f_3722(i64) #0 {
top:
  ret i64 10
}

julia> @code_native f(1)
        .text
; ┌ @ REPL[125]:1 within `f'
        pushq   %rbp
        movq    %rsp, %rbp
        movl    $10, %eax
        popq    %rbp
        retq
        nopl    (%rax,%rax)
; └

Above we see that LLVM has figured out that it can precompute 5 + 5: the assembly shows that the function just returns 10, rather than adding 5 + 5 on every call.

! is used for negation. The full list of unary operators is here:
https://github.com/JuliaLang/julia/blob/master/src/julia-parser.scm#L99-L100
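For example:

julia> !true
false

julia> factorial(5)   # no postfix n!; factorial is called like any other function
120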


Concerning the type stability of a function definition like that, I think the important thing is to declare the output as const or to wrap it in a function. For example:

julia> function f(x)
         if x > 1
           x -> sin(x)
         else
           x -> cos(x)
         end
       end

julia> g = f(2)
#1 (generic function with 1 method)

julia> function use_g(x)
         g(x)
       end
use_g (generic function with 1 method)

julia> @code_warntype use_g(π)
Variables
  #self#::Core.Compiler.Const(use_g, false)
  x::Core.Compiler.Const(π, false)

Body::Any
1 ─ %1 = Main.g(x)::Any
└──      return %1


Note the type instability of use_g: the compiler cannot know the output type of g, because g is a non-constant global variable that merely happens to be bound to the function returned by f.

You can solve that with const g = f(2), or by wrapping the definition of g inside the function. A minimal sketch of the const route (gc is a fresh illustrative name, since g is already bound above):
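const gc = f(2)     # gc's concrete closure type is now fixed
use_gc(x) = gc(x)   # @code_warntype use_gc(π) now infers Body::Float64

Or, wrapping the definition of g inside the function: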

julia> function use_g(x)
         g = f(2)
         g(x)
       end
use_g (generic function with 1 method)

Now, the problem changes if g is being defined at runtime, as in:

julia> function use_g(x)
         g = f(x)
         g(x)
       end
use_g (generic function with 1 method)

I guess in a case like this it is best to avoid that type of function definition and instead define a function that returns the value directly. For example, in your case, instead of returning an anonymous function, the function could return the result of applying the formula to a value:

julia> function tps_basis_eta(r, m::T, d::T) where {T <: Integer} # add r 
           @assert 2m>d "Too many dimensions for degrees of smoothness"
           if iseven(d)
               # d÷2 (integer division) keeps factorial's argument an Integer
               return (-1)^(m + 1 + d÷2) * r^(2m-d) * log(r) /
                   (2^(2m-1) * π^(d/2) * factorial(m-1) * factorial(m - d÷2))
           else
               return gamma(d/2 - m) * r^(2m-d)/
                   (2^(2m) * π^(d/2) * factorial(m-1))
           end
       end
tps_basis_eta (generic function with 1 method)


Then you can always define an anonymous function that calls this one and pass it as a parameter to something else:

julia> function use_tps(f::Function, x, y)
          return f(x), f(y)
       end
use_tps (generic function with 1 method)

julia> use_tps(r -> tps_basis_eta(r,2,1), 0.1, 0.2)
(8.333333333333336e-5, 0.0006666666666666669)

So you can still call your function with any values inside use_tps, and everything is completely type-stable.
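One way to check is @inferred from the Test standard library, which throws if the return type of the call cannot be concretely inferred. A quick sketch:

using Test

@inferred use_tps(r -> tps_basis_eta(r, 2, 1), 0.1, 0.2)   # passes: Tuple{Float64,Float64}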

One of my goals is to compute the constant terms once but use them many times; simply adding the r argument to tps_basis_eta does not achieve that.

Maybe I should be thinking more about macros than types to achieve this? Here’s a much simpler case:

function m1(k)
    r -> k*r
end

julia> m1b = m1(3)
#15 (generic function with 1 method)

julia> typeof(m1b)
var"#15#16"{Int64}   # is that type from the argument to m1b or to m1?

julia> @code_warntype m1b(4)
Variables
  #self#::var"#15#16"{Int64}
  r::Int64

Body::Int64
1 ─ %1 = Core.getfield(#self#, :k)::Int64
│   %2 = (%1 * r)::Int64
└──      return %2

julia> @code_llvm m1b(4)

;  @ /home/ross/julia-test/test2.jl:31 within `#15'
define i64 @"julia_#15_1384"([1 x i64]* nocapture nonnull readonly dereferenceable(8), i64) {
top:
  %2 = getelementptr inbounds [1 x i64], [1 x i64]* %0, i64 0, i64 0 # guessing this is retrieving k, captured from the enclosing scope
; ┌ @ int.jl:87 within `*'
   %3 = load i64, i64* %2, align 8
   %4 = mul i64 %3, %1
; └
  ret i64 %4
}

So the generated code does not simply embed a constant value for k. Compare doing that explicitly:

function m2(r)
    3*r
end

julia> @code_llvm m2(4)

;  @ /home/ross/julia-test/test2.jl:34 within `m2'
define i64 @julia_m2_1396(i64) {
top:
;  @ /home/ross/julia-test/test2.jl:35 within `m2'
; ┌ @ int.jl:87 within `*'
   %1 = mul i64 %0, 3
; └
  ret i64 %1
}

To make it more like the real case, do some computations on the “constant” values:

function m3(r)
    factorial(3)*r
end

julia> @code_llvm m3(4)

;  @ /home/ross/julia-test/test2.jl:38 within `m3'
define i64 @julia_m3_1397(i64) {
top:
;  @ /home/ross/julia-test/test2.jl:39 within `m3'
; ┌ @ combinatorics.jl:27 within `factorial'
; │┌ @ combinatorics.jl:21 within `factorial_lookup'
; ││┌ @ array.jl:809 within `getindex'
     %1 = load i64*, i64** inttoptr (i64 139697911826928 to i64**), align 8
     %2 = getelementptr inbounds i64, i64* %1, i64 2
     %3 = load i64, i64* %2, align 8
; └└└
; ┌ @ int.jl:87 within `*'
   %4 = mul i64 %3, %0
; └
  ret i64 %4
}

So, 3! isn’t computed and the result stuck in the code, as in m2. It looks as if the factorial call is being resolved to a lookup in a table of precomputed values, and so this is relatively cheap. But still not free, and as the constant expressions get more involved there will be more work.

Maybe Memoize.jl is useful to you.
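For instance, a sketch of what that could look like (tps_const and eta are illustrative names, following the odd-d branch of your function):

using Memoize, SpecialFunctions

# @memoize caches the result per (m, d), so the constant part is computed once
@memoize tps_const(m, d) = gamma(d/2 - m) / (2^(2m) * π^(d/2) * factorial(m - 1))

eta(r, m, d) = tps_const(m, d) * r^(2m - d)

Note that each call still pays for the cache lookup, so this is cheaper than recomputing but not free.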

Anyway, I would probably use a more pedestrian approach: keep r as a parameter and pass the precomputed constant terms to the function in a data structure, also as a parameter. But of course how practical that is depends on your actual problem.
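For example, a sketch with illustrative names, precomputing the odd-branch constants for m = 2, d = 1:

using SpecialFunctions

# Compute the constants once and pass them along with r
c = (f = gamma(1/2 - 2) / (2^4 * π^(1/2) * factorial(1)), rexp = 3)
tps_eta(r, c) = c.f * r^c.rexp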

One alternative which I find cool, although I am not sure it will fit your needs either, is to use a functor, something like this:

The struct carries the input values as type parameters, and the inner constructor computes the constants:

julia> struct Test{M,D}
         c1::Float64
         c2::Float64
         Test(m,d) = new{m,d}( m*d, m/d )
       end

Define a functor, that is, a method that makes objects of type Test callable, using the precomputed constants:

julia> function (t::Test{M,D})(r) where {M,D}
         t.c1*r - t.c2*r
       end

When you define an instance of the type Test, you get the constants:

julia> t = Test(3,4)
Test{3,4}(12.0, 0.75)

And this object can be called with different r, to compute the result:

julia> t(2.5)
28.125
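Because the constants live in concretely typed fields (and m and d are encoded as type parameters), a call like t(2.5) is fully inferable. This is essentially the style Interpolations.jl uses: the defining characteristics go into the type, and the precomputed data goes into the fields.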

When I used the code in my original post in the harness described at the bottom, it took 3.9 s to execute. When I pulled the constants out like this:

    else
        f = gamma(d/2 - m)/
            (2^(2m) * π^(d/2) * factorial(m-1))
        rexp = 2m-d
        r -> f * r^rexp
        # gamma(d/2 - m) * r^(2m-d)/
        #     (2^(2m) * π^(d/2) * factorial(m-1))

the execution dropped to 0.2s. (The else covers the case when d is odd; I used m=2 and d=1.)
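For completeness, here is a sketch of the whole function with the constants hoisted in both branches (using d÷2 so that factorial receives an Integer):

using SpecialFunctions

function tps_basis_eta(m::T, d::T) where {T <: Integer}
    @assert 2m > d "Too many dimensions for degrees of smoothness"
    rexp = 2m - d                 # computed once, captured by the closures
    if iseven(d)
        f = (-1)^(m + 1 + d÷2) /
            (2^(2m-1) * π^(d/2) * factorial(m-1) * factorial(m - d÷2))
        r -> f * r^rexp * log(r)
    else
        f = gamma(d/2 - m) /
            (2^(2m) * π^(d/2) * factorial(m-1))
        r -> f * r^rexp
    end
end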

I thought Julia was supposed to be smart about optimizing, so I'm a bit surprised that manual intervention seems necessary.

@lmiq suggested precomputing the values and putting them in a struct. I figured that as long as I would be paying to access them from a struct anyway in his plan, I might as well get them directly from the local scope instead. Hence the code above.

It appears the resulting code is getting the constants by lookup, rather than having them directly in the program, unlike @mkitti's example (When and How to use Types - #2 by mkitti), in which the result of a computation (5 + 5) appears directly in the code as 10. It may be unreasonable to expect that for a Float64 value.

For the curious, here's the test framework. It mirrors the intended use of the functions I've been discussing, except that I run the whole thing 10,000 times.


struct tps_basis
    "nx x nx distance transform"
    E
    "nx x 2 constraints"
    T
end

"""Construct T and E matrix for the given
one dimensional set of xs
"""
function make_tps_basis(xs)
    @assert ndims(xs) == 1 "only handle one dimensional xs"
    nx = length(xs)
    eta1 = tps_basis_eta(2, 1)

    """Note the following calculation exhibits 2 types of redundancy.
    The result is symmetric and so it is only necessary to compute 
    the upper or lower triangle.
    Second, many of the arguments to eta1 are used repeatedly.
    Would benefit from memoization."""
    E = [eta1(abs(i-j)) for i =xs, j=xs]
    T = hcat(ones(nx), xs)
    tps_basis(E, T)

end

function harness(n)
    for i = 1:n
        make_tps_basis(1:70)
    end
end

@time harness(10000)

Consider using BenchmarkTools.jl (https://github.com/JuliaCI/BenchmarkTools.jl) for timing.
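For example, a sketch:

using BenchmarkTools

# @btime runs the expression repeatedly and reports the minimum time and
# allocations; interpolate arguments with $ so they are not treated as globals
@btime make_tps_basis($(1:70))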


Julia still uses a compiler to do optimization passes. In order for the compiler to replace some of the calculations, you sometimes need to help it prove that the values or functions involved will not change.
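For example, continuing the m3 example above (K and m4 are illustrative names):

# A const global is a value the compiler can prove will never change,
# so the factorial result is folded straight into the code, as in m2:
const K = factorial(3)
m4(r) = K * r    # @code_llvm m4(4) now shows a plain multiply by 6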

Your questions are heading in the direction of constant propagation. You may want to start a new post specifically on that topic to get the best responses.

Here are some prior posts: