I would much prefer to be able to write this as something like
f(::Val{N}, ::Type{Float64}) where {N} = 1023*log(N, 2) + log(N, 2-2^-52)
since it would be much cleaner. However, it seems the compiler isn’t smart enough to do this constant folding automatically. Is there a way to force this function to compile down to a constant?
The essential thing is that you want to write them like this:
@generated function foo(x::T, y::U) where {T, U}
    # Compute some things based on T and U; everything here happens at compile time.
    # You can't use things like eval or Core.Compiler.return_type here.
    quote
        # An expression based on the results of the above computation.
        # This Expr will get lowered to the runtime function body.
        # If you put a value here rather than an Expr, your @generated function
        # will be a constant function that just returns that value for any input
        # of the correct types.
        # The code that is returned here can't contain closures.
    end
end
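Applied to your f above, a minimal sketch following that template (same formula; the arithmetic runs once per N, at compile time) would be:

@generated function f(::Val{N}, ::Type{Float64}) where {N}
    # N is known from the type, so this runs at compile time, once per N.
    c = 1023 * log(N, 2) + log(N, 2 - 2^-52)
    quote
        $c  # the runtime body is just the precomputed literal
    end
end

f(Val(10), Float64)  # returns the constant precomputed for N = 10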
What are the pros and cons of the generated approach vs the macro approach? Obviously the generated version allows for arbitrary base arguments. Does it have a drawback?
@generated functions are for compile-time computations when types are available. Macros are for parse-time computations where only syntax is available.
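For contrast, here is a rough sketch of the macro approach (hypothetical macro name f_const, just to illustrate): since a macro only ever sees syntax, the base has to appear as a literal in the source.

macro f_const(N)
    # At expansion time N is whatever syntax was written, so it must be a literal integer.
    N isa Integer || error("@f_const needs a literal integer base")
    c = 1023 * log(N, 2) + log(N, 2 - 2^-52)  # computed at parse/expansion time
    return c                                  # splice the literal constant into the caller
end

@f_const 10   # fine: expands to the precomputed constant
# @f_const n  # fails: the macro receives the Symbol n, not a value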
You can’t use compiler internals inside an @generated function, and you can’t return a closure or certain other impure constructs. Note that the notion of ‘pure’ here is a little unusual; for instance, array mutation is fine.
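For example, a toy generator like this (hypothetical sumsq) is fine even though it builds up and mutates an array while computing the body:

@generated function sumsq(::Val{N}) where {N}
    squares = Int[]              # mutating a local array in the generator is fine
    for i in 1:N
        push!(squares, i^2)
    end
    quote
        $(sum(squares))          # the returned code itself, though, can't contain closures
    end
end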
I’d argue that the @generated function approach is the correct one here, because what you’re doing is really a type-domain computation, not a syntax transformation. I’d encourage you to read the docs on them though, as I won’t be able to give you as accurate a picture of their limitations and structure as the documentation will.