Thoughts around ways of ensuring compile time evaluation / const prop / inference?

When writing julia code, I find a lot of my time is spent on a common theme: trying to ensure some piece of code is either constant propagating or inferring as I intend / expect. I find this stuff quite difficult to systematically test for or ensure - it’s often a bit of trial and error, and is quite tedious. The annoying thing is that if this gets messed up, nothing errors - the compiler just generates bad code. It can be quite easy to make a change, and for it to negatively affect some downstream codepath.

Has there been much discussion around ways of communicating these expectations to the compiler, and erroring at compile time if they are violated?

For the constant propagation side of things, I was imagining something analogous to constexpr in C++ - just a way of saying “this variable value or function return value should be known to you, compiler”.

For the inference side of things, it is a little harder due to the issue of deciding what counts as “bad” inference / a degradation in type-specificity - has anyone given this much thought? Does the compiler currently have some internal notion of this?
Irrespective, there are probably some simple but immediately useful heuristics that could be attached to methods somehow. For example: “there shouldn’t be any Anys in the return type of this method”, or “there shouldn’t be any Unions in the return type of this method”, or perhaps something like “if none of the inputs had Any in them, there shouldn’t be any Anys in the return type” - etc.
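For what it’s worth, a crude, testable version of heuristics like these can already be written today using Base.return_types. A sketch, where f and its signature are just placeholders for a real method under test:

```julia
# Sketch: crude versions of the heuristics above, using Base.return_types.
# `f` and its signature are placeholders for a real method under test.
f(x, y) = x + y

# true if T is Any, or is a Union with Any buried somewhere inside it
has_any(T) = T === Any || (T isa Union && (has_any(T.a) || has_any(T.b)))

for rt in Base.return_types(f, Tuple{Int, Int})
    @assert !has_any(rt)       # "no Anys in the return type"
    @assert !(rt isa Union)    # "no Unions in the return type"
end
```

This only checks the signatures you explicitly list, of course, so it shares @inferred’s limitation of needing the right tests written.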

Looking forward to your thoughts.


@NHDaly has many


Just checking, do you know about

julia> using Test

julia> foo(x, y) = x + y
foo (generic function with 1 method)

julia> bar(x, y) = +(x, y...)
bar (generic function with 1 method)

julia> @inferred foo(3, 4)
7

julia> @inferred bar(3, Any[4])
ERROR: return type Int64 does not match inferred return type Any
 [1] error(::String) at ./error.jl:33
 [2] top-level scope at REPL[5]:1

Thanks - I’m aware of @inferred. It’s definitely useful, but it’s not very general (requires writing the right tests and only verifies the types you test it on). I also feel it would be nicer to have something closer to where the methods are defined.

If I write a julia function

f(x, y, z) = x^2 + y / z

that function is impossible to do any sort of static type checking on within the age of the universe. The reason is simple: there is a combinatorial explosion in the number of types to check.

Each argument of the function signature is unconstrained, so any T<:Any may end up in each of the arguments. Let’s suppose for the sake of argument that there are 100 concrete types in julia and 20 parametric types that each have only one parameter, with no constraint on it other than that the parameter must be a concrete type (the actual situation is much, much more extreme than this). Each of those 20 parametric types can then be an extra 100 types. That means that in this minimal version of julia, there are 100 + 20 × 100 = 2,100 types that could possibly go in each of the arguments to f. f takes 3 arguments, so there are 2,100³ = 9,261,000,000 different type signatures that could be assigned to f.

If we suppose that inferring the output type of each signature takes 1µs, then it would take over two and a half hours to do inference on all the different type signatures - for a single three-argument function in a toy type system. In reality, julia has many parametric types with multiple parameters, and those parameters do not need to be concrete types. In fact, they don’t even have to be types: they can be (isbits) values.
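Spelled out, the toy arithmetic (with the headline figure being 2,100³ = 9,261,000,000 signatures) works out as:

```julia
# Toy arithmetic for the combinatorial explosion described above.
# All counts here are deliberately made-up, minimal numbers.
n_concrete   = 100                                    # toy count of concrete types
n_parametric = 20                                     # toy count of 1-parameter parametric types
n_per_arg    = n_concrete + n_parametric * n_concrete # 2_100 possible types per argument
n_sigs       = n_per_arg^3                            # 9_261_000_000 signatures for 3 arguments
hours        = n_sigs * 1e-6 / 3600                   # ≈ 2.6 hours at 1 µs per signature
```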

For instance, arrays take an integer parameter for the number of dimensions. That means there are 1.8e19 different T <: (Array{Float64, N} where {N}) on a 64 bit machine.

The cost we pay for the expressiveness of multiple dispatch and parametric types is that we cannot use brute force to check an unconstrained function signature for what types are valid inputs and what the inferred output type is.

That said, there’s a lot that can still be done if you’re willing to restrict your functions to a very small corner of the type-space or to specify a set of types to check a given function on. For instance, this blog post does some static type checking, though obviously one needs to be very careful to avoid the combinatorial explosion in type checking time.

Yeah, of course - that would indeed be silly.
I was more imagining that some checks could be run during each compilation of the method - i.e. once per unique type tuple the method is called with.

Thanks for the link, it’s interesting. I wonder if what I’m imagining above could just be implemented with a generated function and some trickery - need some time to think about it…

Yes, that can be done:

_f(x, y, z) = x^2 + y / z

@generated function f(x, y, z)
    # inside a generated function, x, y, z are bound to the argument *types*
    out_type = Core.Compiler.return_type(_f, Tuple{x, y, z})
    Core.println("statically inferred return type was $out_type")
    :(_f(x, y, z))
end

julia> f(1, 2, 3)
statically inferred return type was Float64
1.6666666666666665

julia> f(4, 5, 5)
17.0

I’m playing with writing a macro to automate that, but it’s not the easiest thing to do.


I got my macro working: here it is

using MacroTools: splitdef, combinedef

macro checked(fdef)
    d = splitdef(fdef)

    f = d[:name]
    args = d[:args]
    whereparams = d[:whereparams]

    # redefine the function under a hidden, gensym'd name
    d[:name] = gensym()
    shadow_fdef = combinedef(d)

    quote
        $shadow_fdef
        @generated function $f($(args...)) where {$(whereparams...)}
            # inside the generated body, the argument names are bound to types
            d = $d
            T = Tuple{$(args...)}
            out_type = Core.Compiler.return_type($(d[:name]), T)
            Core.println("statically inferred return type was $out_type")
            args = $args
            Expr(:call, d[:name], args...)   # forward to the hidden copy
        end
    end |> esc
end

Now at the REPL

julia> @checked f(x, y) = x + y
f (generic function with 1 method)

julia> f(1, 2)
statically inferred return type was Int64
3

julia> f(1, 2)
3

Thanks Mason, that’s a really neat macro - I will try and extend it to meet my use cases. I could see myself using it a lot.

Does anyone have any thoughts on the constexpr side of things? Getting some guarantees about constant propagation would be fantastic (in modern C++ (20), you can pretty much do anything with constexpr).
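Not a true constexpr, but one trick that exists today is lifting values into the type domain with Val and doing the work inside a generated function, which forces the computation to happen during code generation. A sketch of the idiom (pow_ct is a made-up example name, and this is not a guarantee the compiler documents):

```julia
# Sketch: forcing a computation to happen at (generated-function) compile
# time by doing it in the body of a @generated function. `pow_ct` is a
# made-up example name, not an existing API.
@generated function pow_ct(::Val{base}, ::Val{exp}) where {base, exp}
    result = base^exp    # runs once, while the method body is being generated
    return :($result)    # the generated body is just the precomputed literal
end

pow_ct(Val(2), Val(10))  # == 1024, computed during code generation
```

The obvious downside is that every distinct value creates a new method specialization, so it only makes sense for small, genuinely static inputs.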

@checked looks like a nice idea. Do you think it could be placed in a package (or added to an appropriate existing one)?

Bump - any thoughts on better constant handling?

There’s Base.@pure, is that what you’re looking for?

@pure is never what anyone is looking for.


Yeah, Base.@pure isn’t great because its contract isn’t really documented and the compiler won’t always tell you if you’ve violated the contract (I think), but also because it only gives the compiler the option to evaluate your function at any time, rather than forcing / guaranteeing compile time evaluation. The solid guarantee of compile time evaluation is the thing I really care for: without that, it’s easy to change something accidentally and have constant propagation break in a critical area, without any error - just a performance degradation.

Generally, this is one of my main concerns with writing and maintaining Julia code: if you hit exactly the right path you get great performance, but if you accidentally deviate from it nothing breaks - the code just runs more slowly. This flexibility is incredibly useful in some situations, but there are many situations where you really don’t need it, and you just want a hard guarantee of some behaviour.


One solution is to write benchmarks. PkgBenchmark.jl is quite useful for comparing performance across Julia and package versions. Granted, it takes more work to cover all functions, but it’s also more powerful, as any change affecting performance will be caught (not just inference).


Its contract is documented fine.
It just isn’t what people think it is.
Its name is wrong.

It’s not for you; it isn’t part of the public API of the julia language.
There is a reason it isn’t exported.


One of the easiest ways to test for constant propagation is to check @test 0 == @allocated myfunction(x)
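For completeness, that pattern needs a warm-up call first so that compilation itself isn’t counted in the measurement (myfunction here is a placeholder):

```julia
using Test

myfunction(x) = 2x + 1   # placeholder for the real function under test

myfunction(1)            # warm up: compile before measuring
@test 0 == @allocated myfunction(1)   # type-stable Int arithmetic allocates nothing
```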

It isn’t a cure-all.
It is a necessary but not sufficient condition.
