When does concrete type inference not matter for performance?

The docstring for code_warntype states that

Not all non-leaf types are particularly problematic for performance, and the performance characteristics of a particular type is an implementation detail of the compiler. code_warntype will err on the side of coloring types red if they might be a performance concern, so some types may be colored red even if they do not impact performance.

It’s unclear when type-stability doesn’t matter. Any pointers on this implementation detail would be appreciated. If a certain type-instability doesn’t matter for performance, there’s no point in spending time trying to fix it.

4 Likes

I added that caveat to the docs. The problem is that which instabilities are slow depends on the exact optimizations of the compiler, and these are constantly changing as the compiler improves. Hence, writing these cases out in the docs would be making a promise about behaviour that can’t possibly be kept.

One possibility would be to define a set of situations in which we can confidently guarantee that the compiler will produce fast code. For example, there is simply no way that the Julia compiler will stop union-splitting a union of two concrete types. But I don’t know what else would be on such a list of promised optimisations.
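
To make that concrete, here is a minimal sketch (all names are made up for illustration) of the kind of two-concrete-type union that gets union-split:

```julia
# `getval` returns either an Int or a Float64, so it infers as the small
# union Union{Int, Float64}. At the call site the compiler union-splits:
# it branches on the two concrete types and resolves `+` statically on
# each branch instead of falling back to runtime dispatch.
getval(flag::Bool) = flag ? 1 : 2.0

function sumvals(flags)
    s = 0.0
    for f in flags
        s += getval(f)   # union-split call
    end
    return s
end

# @code_warntype sumvals([true, false, true])   # the small union is flagged, but the loop stays fast
```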

4 Likes

Even union splitting has some performance overhead, but it’s often small. So it basically always helps to remove the warnings, but doing so may not have a good cost/benefit ratio for your time.

It’s also not linear, so you can’t make definitive statements. In my experience, if you have lots of union splitting in a long, complex function that has other type-stability problems, both compile time and run time may suffer more than with a single union split in a simple function. And worse, fixing something unrelated may change how much the type instability matters.

You can only really know by checking with ProfileView.jl and BenchmarkTools.jl.
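
For example, a minimal sketch of that kind of check (the function names are invented for illustration) could look like:

```julia
using BenchmarkTools

# `s` starts as an Int and becomes a Float64, so it infers as Union{Int, Float64}.
function unstable_sum(xs)
    s = 0
    for x in xs
        s += x
    end
    return s
end

stable_sum(xs) = sum(xs)   # type-stable baseline

xs = rand(10_000)
@btime unstable_sum($xs)   # measure what the instability actually costs
@btime stable_sum($xs)     # compare against the stable version

# In a larger application, profile first and only descend into the hot spots:
# using ProfileView
# @profview run_my_workload()   # hypothetical entry point; runtime-dispatch frames are highlighted in red
```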

2 Likes

Starting from ProfileView and using descend_clicked() (after clicking on a top-level red bar) is a pretty sure-fire way to focus on inference problems that matter for runtime performance.

Another thing to think about, though, is the following: what are the costs of poorly-inferred code? There are several, but the most obvious one is runtime method dispatch. So any code that is poorly inferred but which doesn’t have to make guesses about which method (and specialization) will be called may not cost you much. A trivial example is a function that has only one method and whose arguments inferred as ::Any you’ve marked with @nospecialize.
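
A small sketch of that trivial example (the function name is hypothetical):

```julia
# One method, argument deliberately left unspecialized. Inference only sees
# `::Any`, but the call never has to guess among competing specializations,
# so the poor inference costs relatively little here.
function describe(@nospecialize(x))
    return string(typeof(x), ": ", sizeof(x), " bytes")
end

describe(1)           # "Int64: 8 bytes" (on a 64-bit system)
describe(1.0 + 2im)   # "ComplexF64: 16 bytes"
```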

If a certain type-instability doesn’t matter for performance, there’s no point in spending time trying to fix it.

That is indeed mostly true. But someday we’ll have static compilation, and then you might find yourself glad for well-inferred code even in cases where it doesn’t matter for runtime performance. (Runtime dispatch is the #1 threat to high-quality static compilation.)

3 Likes

It’s common for runtime single dispatch to be supported by statically compiled languages. Wouldn’t it be too restrictive to exclude runtime dispatch completely as a condition for (future) static compilation of Julia code?

1 Like

This is getting a bit speculative since we don’t yet have a static compilation solution in Julia proper. But my crystal ball says this: supporting runtime dispatch itself is not hard; the bad part is that it makes it hard for Julia to discover what code needs to be cached in the static library. Inferrable code makes discovery easy because you can predict all the method dispatches in advance: from the entry point(s) specified by whoever is compiling the library, you just recursively follow the calls in the inferred code and add each callee to the list of method specializations that need to be cached. With runtime dispatch, by definition you don’t know the callee in advance. So how do you know which methods, and which method specializations, to include and which to exclude? “Including everything” is likely impractical (huge binaries) and still doesn’t guarantee that you won’t wish you could compile a new specialization.
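
As a purely illustrative sketch (not the actual compilation machinery, and all names are made up), you can see the discoverability difference with the standard reflection tools:

```julia
callee(x::Int) = x + 1
callee(x::Float64) = x + 1.0

well_inferred(x::Int) = callee(x)          # the callee is visible in the inferred code
hidden_callee(r::Ref{Any}) = callee(r[])   # r[] is ::Any, so the callee is a runtime dispatch

# @code_typed well_inferred(1)              # shows a statically resolved (likely inlined) call
# @code_typed hidden_callee(Ref{Any}(1))    # shows a dynamic call whose target is unknown ahead of time
```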

There are potential solutions, but it’s far too much (and too speculative) to discuss at present.

3 Likes

That might be worth adding to the documentation.

It seems like useful information for this case.