Semantics of constant folding for type constructors

I would like to expose a package API to users as a simple set of parametric (singleton) types with internally calculated parameters, using the smallest possible set of symbols and without exposing type constructors. E.g.,

calculate_C(N,M) = N + M # placeholder for a more complicated calculation

struct Foo{N,M,C}
    function Foo{N,M}() where {N,M}
        @assert N isa Int && M isa Int && N > 0 && M > 0
        C = calculate_C(N, M) # user does not and should not care about C
        new{N,M,C}()
    end
end

where N and M are integers.

A design choice I have arrived at could be

@inline f(N::Int) = Foo{N,1}()

Base.literal_pow(::typeof(^), ::Foo{N,M}, ::Val{y}) where {N,M,y} = Foo{N,M*y}()

so the user can do f(3)^4 to construct Foo{3,4}().
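For concreteness, here is a self-contained sketch of the whole design (with the struct from the question completed with an explicit `new` call) that can be pasted into a REPL:

```julia
calculate_C(N, M) = N + M  # placeholder for a more complicated calculation

struct Foo{N,M,C}
    function Foo{N,M}() where {N,M}
        @assert N isa Int && M isa Int && N > 0 && M > 0
        new{N,M,calculate_C(N, M)}()  # C is derived, never user-supplied
    end
end

@inline f(N::Int) = Foo{N,1}()

Base.literal_pow(::typeof(^), ::Foo{N,M}, ::Val{y}) where {N,M,y} = Foo{N,M*y}()

f(3)^4 === Foo{3,4}()  # true: both are the singleton Foo{3,4,7}()
```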

But this relies on constant folding, the semantics of which are unclear to me:

  1. Do I need the @inline?

  2. Is the simple form f(::Int) guaranteed to work inside functions (when called with constants), or should I expose a Val{N} argument?

  3. Is there any other operator that has a constant-lowering version like literal_pow?

(Incidentally, why does literal_pow dispatch on the function type? Any way to customize this too and use it for other functions?)

I don’t think you need it on f, but you’ll want it on the constructor you’re calling. Ultimately, the compiler needs to be able to propagate the Int to that constructor, otherwise you’re going to incur a type instability in the code that calls f (together with dynamic dispatch to your constructor).

You can carry these sorts of calculations very far; lifting the values to the type domain allows eliminating the error check, for example, but they’re very brittle.
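As a small illustration of what lifting to the type domain can buy (the function name here is made up): once the value is a type parameter, the check runs on a compile-time constant within each specialization, and the optimizer can typically fold the branch away.

```julia
# Hypothetical example: N is a type parameter, i.e. a constant within each
# specialization, so the assertion typically folds away entirely.
checked(::Val{N}) where {N} = (@assert N > 0 "N must be positive"; N)

checked(Val(3))   # returns 3; @code_typed checked(Val(3)) usually shows no branch
# checked(Val(-1)) would throw on the first call of that specialization
```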

From my own experiments with this approach, it doesn’t really matter. As long as the compiler knows that the arguments are constant, it should work just the same (and on the flip side, be just as brittle).

Not sure what exactly you’re thinking of here, but string interpolation lowers to calls to string IIRC.
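For example (assuming the point was about interpolation), an interpolated string is just sugar for a call to `string`, which `Meta.@lower` can confirm:

```julia
y = 42
@assert "n = $y" == string("n = ", y)  # interpolation lowers to a `string` call
# Meta.@lower "n = $y"  # shows the call to Base.string in the lowered code
```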


Constant folding is a compiler optimization; as such, its only semantics are that it mustn’t change any semantics. The observable behavior of the program must be independent of whether a compiler optimization was applied.

I don’t think there can be any guarantees about whether a compiler optimization will apply, formally speaking at least; they work on a best-effort basis. That said, if optimization fails you’ll get run-time dispatch, which can be detected in an automated manner (@allocated, JET.jl), and you’ll get imperfect inference, which you can test for using Test.@inferred. Entries with run-time dispatch are also highlighted in CPU profiler flame graphs with a special color (red, I think). @code_typed and friends can confirm the optimizer was successful, Cthulhu.jl can be useful for debugging, etc.; you probably already know most of this.
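A minimal sketch of the Test.@inferred part of this (function names made up): a function whose return *type* depends on an argument *value* fails inference on the generic signature, and @inferred reports the mismatch.

```julia
using Test

stable(n)   = Val(3)   # return type is independent of the argument value
unstable(n) = Val(n)   # return type Val{n} depends on the *value* of n

@inferred stable(1)    # passes: inferred and actual return types agree
# @inferred unstable(1)  # errors: inferred `Val` vs. actual `Val{1}`
```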

In practice, if the input to f is a constant literal, it’s as good as immediately constructing the singleton type. Basically, you’ve got a value and want to move it into the type domain, you probably want to do it as soon as possible, but the distinction between, e.g., Val(3) and Val{3}() is nil in practice. The compiler is smarter than that.
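To illustrate that last point with made-up wrappers: given a constant argument, the value form and the type form end up indistinguishable after constant folding.

```julia
lift_value(N::Int) = Val(N)             # lifted via constant propagation
lift_type(::Val{N}) where {N} = Val(N)  # value already in the type domain

# With a literal argument the two produce the identical singleton:
lift_value(3) === lift_type(Val(3)) === Val(3)  # true
```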

Regarding literal_pow via ^, you probably don’t need that; it’s only useful if you want to use a different function depending on whether a value is a constant literal or not. And exposing that in your interface makes for a very ugly interface IMO.
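To spell out when literal_pow even fires (a small sketch; the values are arbitrary): only a literal exponent is rewritten at lowering time, while a variable exponent goes through the ordinary ^ method.

```julia
x = 2.0
a = x^3   # literal exponent: lowers to Base.literal_pow(^, x, Val(3))
n = 3
b = x^n   # variable exponent: ordinary call ^(x, n); literal_pow not involved
a == b == 8.0  # true: the rewrite must not change the result
```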

So the hack could be extended to other operators/functions in the future if necessary.