# Make entire function constant fold

Right now, in BetterExp, I have code that looks like

MAX_EXP(::Val{2},  ::Type{Float64}) =  1024.0                   # log2 2^1023*(2-2^-52)
MIN_EXP(::Val{2},  ::Type{Float64}) = -1075.0                   # log2 2^-1075
MAX_EXP(::Val{2},  ::Type{Float32}) =  128f0                    # log2 2^127*(2-2^-23)
MIN_EXP(::Val{2},  ::Type{Float32}) = -149f0                    # log2 2^-149
MAX_EXP(::Val{ℯ},  ::Type{Float64}) =  709.7827128933845        # log 2^1023*(2-2^-52)
MIN_EXP(::Val{ℯ},  ::Type{Float64}) = -745.1332191019412076235  # log 2^-1075
MAX_EXP(::Val{ℯ},  ::Type{Float32}) =  88.72284f0               # log 2^127 *(2-2^-23)
MIN_EXP(::Val{ℯ},  ::Type{Float32}) = -103.97208f0              # log 2^-150
MAX_EXP(::Val{10}, ::Type{Float64}) =  308.25471555991675       # log10 2^1023*(2-2^-52)
MIN_EXP(::Val{10}, ::Type{Float64}) = -323.60724533877976       # log10 2^-1075
MAX_EXP(::Val{10}, ::Type{Float32}) =  38.53184f0               # log10 2^127 *(2-2^-23)
MIN_EXP(::Val{10}, ::Type{Float32}) = -45.1545f0                # log10 2^-150

I would much prefer to be able to write this as something like

f(::Val{N}, ::Type{Float64}) where {N} = 1023*log(N, 2) + log(N, 2-2^-52)

since it would be much cleaner. However, the compiler doesn't seem to be smart enough to do this constant folding automatically. Is there a way to force this function to compile down to a constant?
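One way to check whether the folding actually happens is to inspect the typed code, as later replies in this thread do. A minimal sketch (`g` is a stand-in name for the generic definition above):

```julia
using InteractiveUtils  # for @code_typed outside the REPL

# Hypothetical generic definition from above:
g(::Val{N}, ::Type{Float64}) where {N} = 1023*log(N, 2) + log(N, 2-2^-52)

# If the compiler folded the computation, this would show a single
# `return <constant>`; otherwise the log calls appear in the body.
@code_typed g(Val(2), Float64)
```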


If there’s only a short list of values, you could use @eval:

for N in (2, ℯ, 10)
    value = 1023*log(N, 2) + log(N, 2-2^-52)
    @eval f(::Val{$N}, ::Type{Float64}) = $value
end
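Once the loop has run, each specialization is an ordinary method that just returns its precomputed constant, so calls like these dispatch directly to a literal (assuming the loop above was evaluated at top level):

```julia
f(Val(2), Float64)   # returns the value precomputed when the method was defined
f(Val(10), Float64)  # likewise for base 10
```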

This is precisely the intended use case for @generated functions.

#+BEGIN_SRC julia
@generated f(::Val{N}, ::Type{Float64}) where {N} = 1023*log(N, 2) + log(N, 2-2^-52)

@code_typed f(Val(32), Float64)
#+END_SRC

#+RESULTS:
CodeInfo(
1 ─     return 204.79999999999998
) => Float64

Be sure to read and understand https://docs.julialang.org/en/v1/manual/metaprogramming/#Generated-functions-1 though.

The essential thing is that you want to write them like

@generated function foo(x::T, y::U) where {T, U}
    # Compute some things based on T and U; everything here happens at compile time.
    # You can't use things like eval or Core.Compiler.return_type here.
    quote
        # An expression based on the results of the above computation.
        # This Expr will get lowered into the runtime function body.
        # If you put a value here rather than an Expr, your @generated function
        # will be a constant function that just returns that value for any input
        # of the correct types.
        # The code that is returned here can't contain closures.
    end
end
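Applied to the original problem, the skeleton might look something like this (a sketch, not the package's actual code; `floatmax(T)` is the largest finite value of the type):

```julia
@generated function MAX_EXP(::Val{N}, ::Type{T}) where {N, T<:AbstractFloat}
    # N and T are available as ordinary values here, at compile time.
    value = T(log(N, floatmax(T)))   # log base N of the largest finite T
    # Returning a plain value rather than an Expr makes the generated
    # function a constant function for this (N, T) combination.
    return value
end
```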

What are the pros and cons of the generated approach vs the macro approach? Obviously the generated version allows for arbitrary base arguments. Does it have a drawback?

@generated functions are for compile time computations when types are available. Macros are for parse-time computations where only syntax is available.

You can’t use compiler internals inside an @generated function, and you can’t return a closure or certain other impure constructs. Note that the notion of ‘pure’ here is a little unusual: for instance, array mutation is fine.

I’d argue that the @generated function approach is the correct one here because what you’re doing is really a type domain computation, not a syntax transformation. I’d encourage you to read the docs on them though, as I won’t be able to give you as accurate a picture of their limitations or structure.
