Avoiding Repeated Function Compilation

precompilation
#1

I am having trouble precompiling a function I am writing. It is pretty important that I precompile this function, as the current compile time is ~700 seconds. I would like to pay this cost only once, the first time I load the module with `using`; paying it on every run of the code is painful.

The function is currently inside its own module, but even though Julia precompiles the module (or at least says that it does), the first function call is still extremely long.

Here is the relevant beginning and end snippet of the file

module Mnlitest2

using OffsetArrays
using TwoFAST # https://github.com/hsgg/TwoFAST.jl

function ξ(Pk,ℓ,n; q=:auto)
    N = 1024
    kmin = exp(-25)
    kmax = exp(25)  
    r0 = 1/kmax

    if q == :auto
        q = max(.5, 1+n)
    end

    r, xi = xicalc(Pk, ℓ, -n; N=N, kmin=kmin, kmax=kmax, r0=r0, q=q)
    return r, xi .* (r .^ (-n))
end

export generateMnli



function generateMnli(Pk::Function; BiasDict22::Dict{String,Int64}, f::Float64)
    M = OffsetArray{Array{Float64}}(undef, 0:4, 0:8, 0:8)
    
    b1 = BiasDict22["b1"]
    bη = BiasDict22["bη"]
    b2 = BiasDict22["b2"]
    bK² = BiasDict22["bK2"]
    bδη = BiasDict22["bδη"]
    bη² = BiasDict22["bη2"]
    bKKpara = BiasDict22["bKK∥"]
    bΠ2para = BiasDict22["bΠ2∥"]
    
    r, xi20 = ξ(Pk,2,0)
    _, xi00 = ξ(Pk,0,0)
    # There are more xi terms but no need to have them all here

    # Below is just one of the terms, many are much longer
    M[4,0,0] = @. ((45*b1^2 - 10*b1*f^2 + 3*f^4)/540. - (f*(5*b1 - f^2)*bη)/90. + (f^2*bη^2)/60.)*xi0m2^2 + 
        ((225*b1^2 - 50*b1*f^2 + 6*f^4)/1350. - (f*(5*b1 - f^2)*bη)/45. + (f^2*bη^2)/30.)*xi2m2^2

    return r, M
end
end

I think the above is a pretty accurate representation of my function. Of course I can provide the entire file, but I felt that the ~500 lines of code would be too much.

I would really appreciate if anyone could help me find what I need to do to get this function precompiled.

0 Likes

#2

I realize this isn’t an answer to your question, but it looks from your sample like the @. annotation is unnecessary, as all of the elements it operates on appear to be scalars. If you remove that macro call, does the compilation time decrease?
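To illustrate the point: `@.` only changes the meaning of an expression when at least one operand is an array; on scalars it is a no-op.

```julia
a, b = 2.0, 3.0
@. a^2 + b          # same as a^2 + b: broadcasting over scalars changes nothing

x = [1.0, 2.0]
@. x^2 + b          # elementwise over the array: [4.0, 7.0]
```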

0 Likes

#3

I apologize if this was unclear. Each of the xi terms is a 1024 element array.
Everything else is indeed a scalar.

0 Likes

#4

Oh, I see: I thought M was an array of floats, not an array of arrays of floats.

Still, if the structure of each of your lines is the same, then you may see a benefit in terms of compilation time by extracting that out to a function, since the broadcasted expression will only need to be compiled once rather than 500 times. For example, instead of:

@. x1 + 5
@. x2 + 3
...

you might see a benefit from doing:

f(x, y) = @. x + y

f(x1, 5)
f(x2, 3)
...

Of course, this is all speculation given what information I have.
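Applied to the snippet in the original post, a hypothetical refactor (the helper name and dummy values below are illustrative, not from the real code) could exploit the fact that every term is a scalar coefficient times the square of a correlation array:

```julia
# Hypothetical helper: each term is (scalar coefficient) * xi^2,
# so this one broadcast kernel is compiled once and reused.
coeffterm(c, xi) = @. c * xi^2

# Dummy values standing in for the real bias parameters and xi arrays.
b1, f, bη = 1.9, 0.5, 0.3
xi0m2 = rand(1024); xi2m2 = rand(1024)

M400 = coeffterm((45*b1^2 - 10*b1*f^2 + 3*f^4)/540 - f*(5*b1 - f^2)*bη/90 + f^2*bη^2/60, xi0m2) .+
       coeffterm((225*b1^2 - 50*b1*f^2 + 6*f^4)/1350 - f*(5*b1 - f^2)*bη/45 + f^2*bη^2/30, xi2m2)
```

The top-level `.+` still broadcasts per line, but the bulk of the broadcast machinery now lives in the single helper.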

0 Likes

#5

You may be correct that this could improve compile time, but even if I could write it this way (and I'm not sure I could), it would be a very large amount of work to do so.

Many of these terms are prohibitively long, for example the M[0,0,0] term is 28 lines of statements similar to the one I posted above, and there are many terms with ~20 lines.

I’m honestly not extremely concerned with the compile time itself. Although I would like it to go down, I’m more concerned with paying it only once, not every time I run the code: the difference between a 10-minute total overhead and a 10-minute overhead on every run is significant, especially while I’m still developing the rest of the code. I appreciate the advice though.

0 Likes

#6

I may be completely wrong here, but I thought that in order to really profit from precompilation, one needs to put precompile statements into the module (not to be confused with __precompile__) so that the compiler knows for which argument types the specialized method should be (pre)compiled. TBH I’ve never used this feature, and now that I try to look it up in the documentation, I find it neither documented in the section Module initialization and precompilation nor in the Performance Tips. What is the current best practice on using precompile?
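For reference, `precompile` statements go at top level in the module body and list the *positional* argument types for which a specialization should be compiled (keyword arguments are not part of the type tuple). A minimal sketch:

```julia
module MyPrecompiled

foo(x::Float64, n::Int) = x^n

# Ask Julia to compile the specialization foo(::Float64, ::Int)
# during module precompilation rather than at the first call;
# precompile returns true on success.
precompile(foo, (Float64, Int))

end # module
```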

0 Likes

#7

Each function has its own type, which is shown as typeof(functionname) and isa Function. So generateMnli may be dispatched more generally than you need it to be. If that is significant in your use, rewrite it like this:

function generateMnli(Pk::F; BiasDict22::Dict{String,Int64}, f::Float64) where {F<:Function}

You could see if adding this makes any difference

precompile(generateMnli, (Function, Dict{String,Int64}, Float64))

Your function ξ may benefit by typing its arguments.
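For example (assuming ℓ and n are always integers, as the snippet suggests), the signature could be tightened to something like:

```julia
# Typed signature for ξ; the F<:Function parameter makes Julia
# specialize on the concrete type of Pk rather than the abstract Function.
function ξ(Pk::F, ℓ::Integer, n::Integer; q=:auto) where {F<:Function}
    # body unchanged
end
```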

0 Likes

#8

I appreciate the help. Interestingly, if I change the type to be Pk::F ... where {F<:Function} then the precompile actually fails (returns false).
If I leave the definition as is, then precompile seemingly does something, taking about 20 seconds, but the first call to the function still takes ~700 seconds. This happens both when the precompile statement is inside the module and when I call it manually before the first call to generateMnli.

Adding a type signature to ξ doesn’t seem to have made a difference.

Is there any benefit to doing something like defining the input and output type of the input function Pk?

0 Likes

#9

It is pretty important that I precompile this function as the current compile time is ~700 seconds, and I would like to only pay this the first time I call it with using

The only way I know to allocate the time for precompilation on first call to a point in time earlier than the client’s first call is through __init__.

module MyModule

export afunction

function afunction(args)
   # ...
end

function __init__()
    # Calling the function with representative arguments here forces
    # compilation when the module is loaded, not at the client's first call.
    throwaway = afunction(args)
end

end # module
1 Like

#10

My understanding is that this would compile the function every time I load the package with `using`. While I may have to settle for this, I was hoping I could pay the cost of compiling once and then run my outer code many times without ever recompiling.

Is this simply beyond what Julia compilation does currently?

0 Likes

#11

You could include it in your personal version of Julia’s system image. Some people do this with plotting packages for the same reason.

I have not done this myself, but there are detailed writeups around (or ask on Slack).

0 Likes

#12

You may want to check out PackageCompiler, which allows fully ahead-of-time compilation of Julia code: [ANN] PackageCompiler.jl It’s still pretty new, but when it works it should allow you to compile your package exactly once, rather than every time.
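In later versions of PackageCompiler.jl (the announcement above describes an earlier interface), the usual workflow is to bake the package, plus an optional script exercising the hot functions, into a custom system image. A sketch, assuming the module lives in a package called `Mnlitest2` and that `precompile_script.jl` is a file you write that calls `generateMnli` once:

```julia
using PackageCompiler

# Bake Mnlitest2's compiled code into a custom system image.
# Launching Julia afterwards with `julia --sysimage mnli_sysimage.so`
# avoids recompiling anything included in the image.
create_sysimage([:Mnlitest2];
                sysimage_path = "mnli_sysimage.so",
                precompile_execution_file = "precompile_script.jl")
```

The sysimage must be rebuilt whenever the package changes, so this pays off most once the code has stabilized.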

3 Likes