Hi all. I would have posted this in Internals & Design, but I'm not allowed to. I've looked around the Discourse and elsewhere, but I'd still like to better understand why llvmcall works the way it does, and whether there are other, better escape hatches somewhere.
My current bottleneck is compile-time overhead, which is why I’m exploring intervening at a lower level.
The following is verboten in Julia:
# llvmtest.jl
using Base: llvmcall

get_ir(x, y) = """
%1 = add i32 $x, $y
ret i32 %1
"""

function llvm_add(x, y)
    ir = get_ir(x, y)
    llvmcall(ir, Int32, Tuple{})
end

@show llvm_add(2, 3)
$ julia llvmtest.jl
ERROR: LoadError: error statically evaluating llvm IR argument
Stacktrace:
[1] llvm_add(x::Int64, y::Int64)
@ Main ~/projects/julia-learning-internals/llvmtest.jl:19
[2] macro expansion
@ show.jl:1181 [inlined]
[3] top-level scope
@ ~/projects/julia-learning-internals/llvmtest.jl:22
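For contrast, the error goes away when the IR string is a compile-time constant. A minimal sketch, passing the operands as llvmcall arguments (%0 and %1) instead of splicing them into the IR; note that in unnamed-function IR the first free SSA temporary is %3 (after the two arguments and the entry block):

```julia
using Base: llvmcall

# IR is a literal string, so it is statically evaluable;
# x and y arrive as %0 and %1 rather than being interpolated.
llvm_add_const(x::Int32, y::Int32) = llvmcall("""
    %3 = add i32 %1, %0
    ret i32 %3
    """, Int32, Tuple{Int32,Int32}, x, y)

@show llvm_add_const(Int32(2), Int32(3))
```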
I can fix this toy example with @generated, but that's a class of solutions I'm trying to avoid, because it ends up exploding the method table.
@generated function llvm_add(::Val{x}, ::Val{y}) where {x,y}
    ir = get_ir(x, y)
    return quote
        llvmcall($ir, Int32, Tuple{})
    end
end

@show llvm_add(Val(2), Val(3))
1) Why does this limitation exist?
I get that I might be breaking some internal guarantees about which methods exist and so on, but in the end I just want to create a function by mashing together separate bits of LLVM IR I have lying around into a "meta-function" and execute it many times, quickly. The hope is to avoid (some of) the compilation overhead of the meta-function, given that I already have the IR for the individual functions it composes.
2) Given my aims, are there escape hatches other than llvmcall that could be useful?