Compilation performance. Branching


My physical model consists of requested results, test subjects, and a variable set of phenomena acting upon the subjects. It is convenient to keep all phenomena-related function calls behind if-else statements within the main program for test-driven development and retrospection, but compilation time grows to as much as 22 minutes. When non-production branches are cut out, i.e. the control flow is resolved manually, compilation time drops to 2 minutes. This is how I see the original problem:
Let X = 1:N, where each AX is a block of code.

  1. Create a file “dummy.jl” containing:
function dummy(; kwarg1=false, ..., kwargN=false)
  for j = 1:T
    if kwarg1
      execute A1
    end
    if kwarg2
      execute A2
    end
    ...
    if kwargN
      execute AN
    end
  end
end
  2. Execute:

Do we already know how compilation time depends on N?
Say, if only kwarg1 is true, is compilation time dominated by O(A1), or by O(dummy) as a whole?
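One way to probe this empirically is to time the first call (which includes compilation) against the second. This is only a sketch; `dummy3` and its trivial stand-in bodies are hypothetical, not the real model:

```julia
# Sketch: measure first-call (compile-heavy) vs. warm time of a branchy kernel.
# The sin/cos/sqrt bodies are hypothetical stand-ins for the real A1..A3.
function dummy3(; kwarg1=false, kwarg2=false, kwarg3=false)
    s = 0.0
    for j = 1:10
        if kwarg1
            s += sin(j)      # stand-in for A1
        end
        if kwarg2
            s += cos(j)      # stand-in for A2
        end
        if kwarg3
            s += sqrt(j)     # stand-in for A3
        end
    end
    return s
end

t_first  = @elapsed dummy3(kwarg1=true)   # includes compilation
t_second = @elapsed dummy3(kwarg1=true)   # already compiled
println("first call: ", t_first, " s, second call: ", t_second, " s")
```

Scaling this pattern up in N (e.g. by generating the function body programmatically) would let you plot compile time against N directly.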

I wonder if I could make the version control system merge just enough code for a given task, where the whole argument array (length 200+) destined for dummy’s real-world prototype would be supplied to a shell script calling e.g. git.


First, please quote your code.

If your problem truly has this simple structure, then I would guess that compile time is dominated by O(1) factors, since there is nothing to infer. Also, a global A may be the source of most of your performance problems at runtime.

But why aren’t you using something like

(kwarg1 || kwarg2 || ...) ? A : nothing

or even

dummy(; kwargs...) = any(values(kwargs)) ? A : nothing
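For illustration, here is a runnable version of the second suggestion, where `run_A` is a hypothetical stand-in for the shared code block A:

```julia
# Hypothetical stand-in for the code block A.
run_A() = "A ran"

# If any keyword argument is true, run A once; otherwise do nothing.
dummy(; kwargs...) = any(values(kwargs)) ? run_A() : nothing

dummy(kwarg1 = false, kwarg2 = true)   # runs A
dummy(kwarg1 = false)                  # does nothing
```

This only makes sense if all the kwargs gate the *same* code block A, which is how the question was initially read.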


Thank you. I did mean that “A” is not a global variable, but code. I have rewritten the post.


Can you afford runtime branches? Then you could add @noinline.

I think your problem is that inlining / IPO / constant propagation lead to many versions of your function getting compiled. Each version initially contains all the code, and dead-code elimination is, afaik, a pretty late step (in other words: you pay for inference and optimization of dead code during compilation).

This might have significant runtime costs, though (benchmark!).
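The `@noinline` suggestion above could be sketched like this, with each phenomenon wrapped in its own `@noinline` function so its body is compiled once rather than inlined into every specialization of the driver (A1/A2 here are hypothetical stand-ins):

```julia
# Each phenomenon lives behind a @noinline barrier: the optimizer will
# not copy its body into the caller, so the driver stays small.
@noinline A1(x) = x + 1.0    # stand-in for phenomenon A1
@noinline A2(x) = 2x         # stand-in for phenomenon A2

function dummy(; kwarg1=false, kwarg2=false)
    s = 0.0
    for j = 1:10
        kwarg1 && (s = A1(s))   # runtime branch, body not inlined
        kwarg2 && (s = A2(s))
    end
    return s
end

dummy(kwarg1=true)   # only A1 executes; A2's body is never pulled in
```

The trade-off is exactly the runtime cost mentioned above: a real function call on every iteration instead of inlined code, so it is worth benchmarking on the actual model.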