Codegen woes

Hello all,

There’s an expressiveness problem I’ve been fighting with for quite a while now. I had thought (and hoped, really) I might just be missing something obvious. But from discussions at JuliaCon, it seems at least @dpsanders, @jpfairbanks, and @ChrisRackauckas have had some similar problems.

I’ll try to boil the problem down to a simple case. There’s a risk in doing this I might reduce the problem to one that has an easy but non-generalizable solution, or that I might not quite align with others’ use cases. Anyway, here goes…

Say your library has a function makecode that takes an x::T and produces an Expr. You’d like the user of your library to be able to pass their own x (or more likely, one they build using other tools you provide) and generate fast code.
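For illustration, here's a minimal sketch of that pattern (`makecode` and `runit` are hypothetical names, not real library API): the library builds an `Expr` from `x`, evals it, and then has to cross a world-age boundary to call the result.

```julia
# Hypothetical sketch: the library turns a user-supplied value into an
# Expr, evals it, and then wants to call the result efficiently.
makecode(x) = :(a -> a + $x)      # stand-in for a real code generator

function runit(x, a)
    f = eval(makecode(x))         # new method, defined in a newer world
    # A plain `f(a)` here would throw a world-age MethodError, so the
    # call has to go through invokelatest:
    return Base.invokelatest(f, a)
end
```

Here `runit(3, 4)` returns `7`, but every call pays the `eval` and `invokelatest` cost, which is the heart of the problem.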

There have been a few suggestions that might address this in at least some situations:

  • @ChrisRackauckas had earlier suggested there may be a way to use nested macro calls to do this. But at that point I may not have understood the problem well enough to describe it clearly, so I don’t know if this is still a candidate.
  • @mohamed82008 suggested expressing T in the type system, in order to use a generated function. This is an elegant solution, but type-level programming presents its own challenges. Even if this is the preferred approach, it would be great to be able to prototype DSLs more quickly than this approach allows.
  • @tim.holy suggested (maybe jokingly?) that hacking the method table could help to get this working.
  • The obvious go-to (and my current approach) is invokelatest. This is generally discouraged.
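To make the second bullet concrete, here is a rough sketch of the type-level approach (with a hypothetical name, `addconst`): the value is lifted into the type domain with `Val`, so a generated function can splice it into the body at compile time instead of calling `eval`.

```julia
# Sketch: lift the value into the type domain, then generate the body
# once per type signature instead of eval-ing code at runtime.
@generated function addconst(a, ::Val{x}) where {x}
    return :(a + $x)              # built at compile time, no eval needed
end
```

`addconst(4, Val(3))` returns `7` with no runtime codegen, but anything you want to splice in has to be representable as a type parameter, which is exactly the type-level programming burden mentioned above.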

It seems there must be a way, however hacky, to allow this to be done easily and efficiently. My understanding of invokelatest had been that it creates a sort of boundary that’s expensive to cross but harmless otherwise.

But this seems not to be the case. Here’s a weird little example that attempts to abstract away some of my current workflow:

f(x) = quote
    function foo(a)
        a + $x
    end
end

function g(x)
    fx = f(x)

    quote
        $fx

        function h(start)
            s = start
            for j=1:10000
                s += foo(j)
            end
            return s
        end
    end |> eval

    Base.invokelatest(h, x)
end

Performance is… not great:

julia> @btime g(2)
  7.831 ms (23946 allocations: 1.35 MiB)
50025002

Any ideas?


I posted a solution for a fast “invokelatest-like” function here:

https://github.com/JuliaLang/julia/pull/32737

I think there are real use cases for a function like this, but of course it could be abused.


It seems my approach has other problems: invokefrozen(h, Int, x) actually does worse. Or maybe it’s because I’m on v1.1.1?

julia> @btime g(2)
  8.559 ms (25957 allocations: 1.46 MiB)
50025002

I’ll need to do some profiling.

You’re measuring compilation time, because every call to g compiles. I have no idea how it does w.r.t. compile time; it just has good runtime speed. You’d generate h once and then call it repeatedly in a loop, right? If that’s the case, you want to measure your h calls, not the h compilation.
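For what it’s worth, restructuring the original example so that eval runs once and only the calls are measured (a sketch using the same foo/h definitions from the example above) gives the same answer without re-compiling on every call:

```julia
# Generate and eval the code once, at top level...
x = 2
eval(quote
    foo(a) = a + $x
    function h(start)
        s = start
        for j = 1:10000
            s += foo(j)
        end
        return s
    end
end)

# ...then call it repeatedly; only the call itself is measured, and only
# one invokelatest boundary is crossed per call.
result = Base.invokelatest(h, x)   # == 50025002, matching the thread
```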

Haha so I am :man_facepalming:

I was testing how to take the symbolic derivative of a user’s function and build a function for that. The test I was running was:

using ModelingToolkit, BenchmarkTools
f(x) = x^4 - 3x^3 + 46*(x-x-x-x)
@variables x

function to_expr(O)
  if O isa ModelingToolkit.Constant
    return O.value
  elseif isa(O.op, Variable)
    isempty(O.args) && return O.op.name
    return Expr(:call, O.op.name, to_expr.(O.args)...)
  end
  return Expr(:call, Symbol(O.op), to_expr.(O.args)...)
end

function generate(O)
  if isa(O.op, Variable)
    isempty(O.args) && return O.op.name
    return Expr(:call, O.op.name, to_expr.(O.args)...)
  end
  return Expr(:call, Symbol(O.op), to_expr.(O.args)...)
end

function differentiate1(f)
  @variables x
  @derivatives D'~x
  op = f(x)
  op2 = expand_derivatives(D(op))
  ex2 = to_expr(op2)
  eval(:($(x.op.name) -> $ex2))
end

function _differentiate(f)
  @variables x
  @derivatives D'~x
  op = f(x)
  op2 = expand_derivatives(D(op))
  ex2 = to_expr(op2)
  _f = eval(quote
    $(x.op.name) -> begin
    $ex2
    end
  end)
  (rt,x) -> superdeadlyunsafe_invokelatest(_f,rt,x)
end

@inline @generated function superdeadlyunsafe_invokelatest(f, ::Type{rt}, args...) where rt
  tupargs = Expr(:tuple,args...)
  quote
    _f = $(Expr(:cfunction, Base.CFunction, :f, rt, :((Core.svec)($args...)), :(:ccall)))
    return ccall(_f.ptr,rt,$tupargs,$((:(getindex(args,$i)) for i in 1:length(args))...))
  end
end

function differentiate2(f)
  _f = _differentiate(f)
  x -> _f(typeof(x),x)
end

_df1 = differentiate1(f)
_df2 = differentiate2(f)
_df1(6)
_df2(6)
_df1(6.0)
_df2(6.0)

@btime $_df1($6) # 12
@btime $_df2($6) # 12
@btime $_df1($6.0) # 12
@btime $_df2($6.0) # 12

and other experiments here: https://github.com/JuliaDiffEq/ModelingToolkit.jl/pull/155#issuecomment-515735993 . This measures out to be 5ns overhead over the function itself, vs 60ns.


That’s really nice. I love the idea of passing the return type as an argument; that makes a lot of sense.

@ChrisRackauckas your solution still uses eval to define the function body. Any idea on how to eliminate the use of that?

No, it’s using Julia’s compiler at runtime so it needs it. Is there anything wrong with it other than aesthetics?

Inference in other functions?

But this infers fine?

Ok so it is almost a function barrier but with eval. Interesting…

The return type is required to be given by the user, and then the compiler handles it just fine.

What about the scope of eval? Which module is it evaluating in? Main?

Depends on where you want it to eval. It’s the same eval.

Yes, but I think in @cscherrer’s case the body of the evaled function can contain arbitrary user code, including code from modules not visible inside the module that defines the function calling eval. This inevitably means we cannot eval in the defining module. So we can either always eval in Main, which limits use to the REPL and the like (not inside other modules), or eval in a module passed as input, but I don’t know if that would have side effects.

To give a concrete example, let’s say Soss.jl has a function transform_and_sample that, given a model definition, transforms the body, evals the model, and samples using some MCMC algorithm. Then DiffEqBayes wants to define a model with a DiffEq solver in it and call Soss.jl. The model would then need to be evaled in DiffEqBayes, not Soss or Main.

Why not just use Soss.eval or Main.eval? I put the eval usage on the user side so they can make the right choice.
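One way to expose that choice is to take the target module as an argument and use Core.eval (a sketch; `sample_in` is a hypothetical name, not real Soss API):

```julia
# Sketch: let the caller pick the module whose namespace the generated
# code should resolve names in.
function sample_in(mod::Module, modelexpr::Expr, x)
    f = Core.eval(mod, modelexpr)   # define in the caller-chosen module
    return Base.invokelatest(f, x)  # cross the world-age boundary once
end
```

A downstream package like DiffEqBayes would then pass its own module, so names used in the model body resolve there rather than in Soss or Main.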

I think we are using 2 definitions for “user”. The user that you mean is probably the user of the above function who is a package developer, e.g. Chad. The user that I mean is someone who wants to define a model and pass it to some function Soss.sample to do its magic. The second user shouldn’t have to explicitly eval the model IMO. But maybe that’s a small price to pay. So calling eval is on the end user.

And if that’s the case, then every non-exported name in the body of the model needs to have its namespace with it, e.g. Soss.func.


I’m not sure of a better solution so I went with this one. Somewhere along the line someone needs to choose where the generated generics live, and to always get generic functions “working” the way the user wants it seems you need to let the user make the choice.

Indeed that is the case, and you see that in the generators of ModelingToolkit:


You can kinda use FunctionWrappers.jl for this as well

using FunctionWrappers: FunctionWrapper
using BenchmarkTools

function foo(expr)
    g = @eval function f(x, y)
        $expr
    end
    gwrap = FunctionWrapper{Int, Tuple{Int, Int}}(g)
    @btime $gwrap(1,2)
end

foo(:(x + y))
#  29.306 ns (0 allocations: 0 bytes)