Unnecessary compilation happening on every call

I seem to have a case where the function gets compiled on almost every call. This is annoyingly slow.

julia> notice(logger, "Before call"); qlp_run()
[2018-11-15 16:13:13 | notice | root]: Before call
[2018-11-15 16:14:18 | notice | root]: First line
...
...
[2018-11-15 16:14:20 | notice | root]: Starting shutdown

Then I run again immediately in the same Julia session:

julia> notice(logger, "Before call"); qlp_run()
[2018-11-15 16:14:25 | notice | root]: Before call
[2018-11-15 16:15:29 | notice | root]: First line
...
...

The top of the function definition looks like this:

function qlp_run(fileStartidx = 0, fileProcPeriod = 1; mode="offline", configFilename = "config.json", inputPath="/run/shm/ifs/")
    notice(logger, "First line")
...

My code currently lives directly in the Main module.

Any suggestions as to what is causing this and how to improve things? I suspect it is the use of keyword arguments. On occasion subsequent calls have executed immediately, but that seems to be the exception.

Please provide a self-contained, minimal working example.

For instance, what is notice, where is it coming from, what julia version are you running, etc.

Sorry, I should have at least included some more details.

Here are the imports:

using Statistics
using FFTW
using DSP
using JSON
using Memento
using Unmarshal
using Sockets
using LinearAlgebra
using Nullables

import Base.push!

notice is from the logging library Memento; it was just a quick and easy way to print the timestamp, since I already use it later in the code. I could modify it to print the time using a more standard method without any difference, since the problem was there before I added those two notice calls (one before the function call and one as the first line of the function body).
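
For example, something like this with the standard library Dates would give the same kind of timestamp (just a sketch, not what my code actually does):

using Dates

# print a timestamp without going through Memento
println("[", Dates.format(now(), "yyyy-mm-dd HH:MM:SS"), " | notice | root]: Before call")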

(v1.0) pkg> status
    Status `~/.julia/environments/v1.0/Project.toml`
  [6e4b80f9] BenchmarkTools v0.4.1
  [35d6a980] ColorSchemes v2.0.0
  [5ae59095] Colors v0.9.5
  [8f4d0f93] Conda v1.1.1
  [5a033b19] CurveFit v0.1.1+ [`~/.julia/dev/CurveFit`]
  [717857b8] DSP v0.5.1
  [7a1cc6ca] FFTW v0.2.4
  [5789e2e9] FileIO v1.0.2
  [0ef565a4] Geodesy v0.5.0
  [7073ff75] IJulia v1.13.0
  [916415d5] Images v0.16.1
  [682c06a0] JSON v0.19.0
  [b964fa9f] LaTeXStrings v1.0.3
  [ee78f7c6] Makie v0.9.0
  [f28f55f0] Memento v0.10.0
  [4d1e1d77] Nullables v0.0.8
  [47be7bcc] ORCA v0.1.1
  [f0f68f2c] PlotlyJS v0.11.2
  [91a5bcdd] Plots v0.21.0
  [438e738f] PyCall v1.18.5
  [d330b81b] PyPlot v2.6.3
  [cbff2730] Unmarshal v0.2.1
  [37e2e46d] LinearAlgebra 
  [10745b16] Statistics 

julia> versioninfo()
Julia Version 1.0.1
Commit 0d713926f8 (2018-09-29 19:05 UTC)
Platform Info:
  OS: Linux (x86_64-pc-linux-gnu)
  CPU: Intel(R) Xeon(R) CPU E5-1650 v3 @ 3.50GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-6.0.0 (ORCJIT, haswell)

I would love to provide a self-contained minimal example, but am not sure how I would even approach it.

Currently the function is 950 lines of a 7k-line code base split over numerous files (excluding “standard” packages). Yes, I know that is bad in terms of program structure, but splitting it further into smaller functions is difficult, since the only way I could get fast processing was to pre-allocate everything, which means function calls become cluttered with temporary variables. I am still trying to refactor to make this better. Part of the reason behind the question is thus to understand what ends up being compiled if I restructure (the function and all called functions, or are the called functions only compiled once? How are inlining decisions made?). How important is it to split the code into separate modules?

I believe I am only observing the problem because the function is so big. If I create a 5-line example, how will I know whether it is being recompiled every time or only once?
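
The only check I can think of for a small example is timing repeated calls, where only the first one should include compilation. A rough sketch with a made-up toy function:

toy(x) = sum(sin, x)   # made-up stand-in for the real function

v = rand(1000)
@time toy(v)   # first call: includes compilation
@time toy(v)   # later calls: should be much faster if nothing is recompiled
@time toy(v)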

I was hoping to reproduce it and hit Ctrl-C to get a stack trace that might give clues about where it is busy (though even if I manage to get a stack trace, I doubt it will reveal why it is recompiling instead of just reusing the previous code), but now it seems to execute immediately with no compile delay. I will need to wait until it recurs before I can try to get that info.

A function is compiled only on its first call, no matter its length in lines of code. There definitely shouldn’t be a minute of waiting time between calling the function and getting to the first line. I’d look into this, but as you use a lot of big packages it seems like it would take too much time to reduce.

If you’re able to share code that deterministically shows this behaviour, please do.

As for your other questions: smaller functions in general allow faster execution, since there are more specialized versions of code available to be called. It may be that the logging notice only revealed the problem and the main function itself is actually slow; in that case it would be worthwhile to check @code_warntype for type instability or to profile the code. As always, the general performance tips apply.
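
For example (a generic sketch with a placeholder function, not your code):

using Profile

g(v) = sum(x -> x^2, v)   # placeholder for the real function

v = rand(10_000)
@code_warntype g(v)       # look for Any/Union-typed variables

Profile.clear()
@profile for _ in 1:100
    g(v)
end
Profile.print()           # shows where the time is actually spent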

Unfortunately, once you start using keyword arguments, @code_warntype becomes more difficult to use. The standard call just shows a call to the inner function with the defaults populated, and as far as I have been able to see you cannot actually call that true inner function yourself.
In the past I have changed the function definition to remove the keyword arguments and just define them in the function body, to at least get some insight. Some instabilities remain, but I also haven’t been able to decode all of what the new output format is telling me (the old code_warntype format was easier for me to decode to get a sense of where things are going wrong).

julia> @code_warntype qlp_run()
Body::Any
120 1 ─ %1 = invoke Main.:(#qlp_run#45)("offline"::String, "config.json"::String, "/run/shm/ifs/"::String, _1::Function, 0::Int64, 1::Int64)::Any   │╻ qlp_run
    └──      return %1                                                                                                                              │ 

julia> @code_warntype qlp_run("offline", "config.json", "/run/shm/ifs/", 1, 0, 1)

julia> @code_warntype qlp_run(mode="offline", configFilename = "config.json", inputPath="/run/shm/ifs/", 1, 0, 1)

julia> @code_warntype qlp_run(1,0,1, mode="offline", configFilename = "config.json", inputPath="/run/shm/ifs/")
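
For what it’s worth, the workaround I mentioned amounts to something like this (qlp_run_impl is a made-up name and the body is obviously elided):

function qlp_run_impl(fileStartidx, fileProcPeriod, mode, configFilename, inputPath)
    notice(logger, "First line")
    # ... rest of the 950-line body ...
end

# thin keyword wrapper kept for convenience
qlp_run(fileStartidx = 0, fileProcPeriod = 1; mode = "offline",
        configFilename = "config.json", inputPath = "/run/shm/ifs/") =
    qlp_run_impl(fileStartidx, fileProcPeriod, mode, configFilename, inputPath)

# The interesting method can then be inspected directly:
# @code_warntype qlp_run_impl(0, 1, "offline", "config.json", "/run/shm/ifs/")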

I am trying to follow the performance tips as I understood them for v0.6; I still need to check the details to properly understand the nuances that might have changed with v1.0. But even if my code is not 100% performant and contains some type instabilities, the minute-long delay doesn’t make sense.
It feels as if every now and again I get Julia into a state where it believes something is dirty (for example, a package updated on disk while an older version is already loaded in memory, so it keeps recompiling to get into a consistent state). But I see this in Julia sessions where I didn’t do an explicit Pkg.update(), so it shouldn’t be that.

Thanks for the suggestions thus far, I will look when it recurs and try to see if that time I get more clues. Any other suggestions on what else to look out for would also be appreciated.

Did you solve the problem [lwabeke]? I have a similar problem in a big function with keyword arguments. I have to spend around 5 sec each time I call the function.

Unfortunately not. At times it seemed to go away for a while, only to reappear after what seemed like only minor changes, none of which should explain this behaviour (for example a Pkg.update or a small change in some sub-function).

I started thinking about wrapping the code in a module to hopefully get benefits from pre-compilation, but I cannot say that my attempts thus far have solved the issue.

While looking at pre-compilation, I found mention of

and the linked
https://juliacomputing.com/blog/2016/02/09/static-julia.html

Both of these are now slightly outdated, but they seemed to indicate a way to see what is being compiled. Maybe directly looking at

might give clues.

Those links will likely be my next course of action to resolve this, but in the meantime I’m looking at more immediate problems and leaving this in the hope that it will be resolved by a newer release or by input from someone else. Or I may refactor to remove the keyword arguments, in the hope that it will go away with a simpler function signature.

If you learn anything more, please share what you have found out.

I got a minimal working example of unnecessary compilation. See the time spent on each call to function func1 inside func2. It looks like func1 is being recompiled each time func2 is called.

function func1()
    keys = (1, 2, 3)[1:2]
    # by changing the line above by the one below, the problem disappears
    #keys = [1, 2, 3][1:2]

    list = Int[ key for i=1:2 for key in keys ]
    # or by changing the line above by the one below, the problem also disappears
    #list = Int[ key for key in keys ]
end


function func2()
    fun = eval( :(x -> x) )
    # or by deleting the line above the problem also disappears
    
    for i=1:4
        time1 = time()
        func1()
        println("  time spend by func1:  ", time() - time1)
    end
   
end


for i=1:3
    println("\nRunning func2 (call $i):")
    func2()
end

Results:

Running func2 (call 1):
  time spent by func1:  0.04088711738586426
  time spent by func1:  1.7881393432617188e-5
  time spent by func1:  5.9604644775390625e-6
  time spent by func1:  5.0067901611328125e-6

Running func2 (call 2):
  time spent by func1:  0.040904998779296875
  time spent by func1:  2.002716064453125e-5
  time spent by func1:  4.76837158203125e-6
  time spent by func1:  4.0531158447265625e-6

Running func2 (call 3):
  time spent by func1:  0.04025602340698242
  time spent by func1:  2.3126602172851562e-5
  time spent by func1:  5.9604644775390625e-6
  time spent by func1:  5.9604644775390625e-6


It looks to me like a combination of factors promotes the unnecessary recompilation, since making independent (unrelated) changes to the code makes the recompilation disappear.

I found some possible culprits:

  • list comprehensions with multiple “for” iterating on tuples
  • the use of eval function

Hope that this minimal working example could help to find the problem of unnecessary compilation.

Interestingly, by replacing the line fun = eval( :(x -> x) ) with another expression that is not a function, e.g. fun = eval( :(3^2) ), it compiles only once. I am starting to believe that unnecessary recompilation may happen when new functions are defined at run time.

Don’t call eval (which indeed can trigger recompilation) to define new functions. Just define new functions directly: say fun = x -> x and not fun = eval(:(x -> x)).
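
Applied to the MWE above, func2 would become something like:

function func2()
    fun = x -> x   # defined directly, no eval

    for i = 1:4
        time1 = time()
        func1()
        println("  time spent by func1:  ", time() - time1)
    end
end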

Thanks @stevengj for the advice. In my original code the expression for the lambda function is generated using some user input. That’s why I need to use eval to define a function at run time. Since it can trigger recompilation I will have to think on another solution to evaluate these expressions.

If users are calling your code as a library, usually they should pass functions directly rather than passing them as strings.
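
For example, instead of taking an expression string and calling eval on it, the library function can take the user’s function as an argument (a sketch with made-up names):

# run_analysis and transform are made-up names for illustration
function run_analysis(transform, data)
    return map(transform, data)
end

# The caller supplies the function directly:
run_analysis(x -> 2x + 1, [1.0, 2.0, 3.0])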

FWIW, I verified that recompilation also occurs when calling Base.invokelatest() inside a function. I think it was introduced in Julia 0.6 (or 0.7), which is around when I first noticed the recompilation issue.
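
For context, this is the kind of pattern I mean (a sketch): a function defined via eval inside another function has to be called through Base.invokelatest because of the world-age restriction, and that is where I see the extra compilation.

function make_and_call()
    f = eval(:(x -> x + 1))
    # calling f(1) directly here would give a world-age ("method too new") error
    return Base.invokelatest(f, 1)
end

make_and_call()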

If you call eval there’s going to be compilation, that’s neither surprising nor a bug.

Sure, but note that in the MWE above eval was used outside func1, which nonetheless seems to be recompiled on each call of func2. I don’t think it is a bug, but maybe I missed something. In any case I will avoid the use of eval at run time.

The code where I experienced the problem didn’t use eval at all. It did use some anonymous functions as part of comprehensions. It shouldn’t be necessary to do any real dynamic dispatch, since every call used the same variable types, etc.
However, I realise just now that I am reading and loading some data from JSON input files. It could be that some of those variables are not type constrained, causing part of the code to be dynamically dispatched. That might be part of my problem. What I still can’t explain is why it at times goes away (subsequent calls start within a second) and then, after just a minor code modification (and/or package update), reappears so that subsequent calls again take a minute to “compile”.
I have in the meantime also moved my code into a module instead of having it in Main itself, and this issue seems to have (mostly?) disappeared. I’m also mostly calling it from an extra wrapper function instead of from the REPL, which might be helping. I’ll probably be calling it directly from the REPL again during the next few days, so I will watch to see if I notice it again.