Working with ASTs

Hello all,

For Soss.jl I’d like a cleaner way of manipulating and reasoning about ASTs. Currently a Model is nearly just an expression:

hello = @model μ begin
    σ ~ HalfCauchy()
    x ~ Normal(μ, σ)
end

builds

julia> hello.args
1-element Array{Symbol,1}:
 :μ

julia> hello.body
quote
    σ ~ HalfCauchy()
    x ~ Normal(μ, σ)
end

Currently all of the manipulations are just moving things around using MacroTools. It works, but it feels very error-prone.

It feels like it could be useful to have something with typed variable binding information. For example (line by line):

@model μ begin

Hmm, ok: μ is bound, but we don’t yet know its type

    σ ~ HalfCauchy()

Now we know σ is a positive real; μ is still bound, but its type is still unknown

    x ~ Normal(μ, σ)

After the final line, x is also bound, and we now know x and μ are both real.
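
A minimal sketch of the kind of per-statement binding environment I’m imagining (all names here are purely illustrative, nothing like this exists yet):

abstract type VarType end
struct Unknown <: VarType end
struct RealVar <: VarType end      # known to be real
struct PosReal <: VarType end      # known to be a positive real

env = Dict{Symbol,VarType}(:μ => Unknown())   # after `@model μ begin`
env[:σ] = PosReal()                           # after `σ ~ HalfCauchy()`
env[:x] = RealVar(); env[:μ] = RealVar()      # after `x ~ Normal(μ, σ)`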

I’m not sure exactly what I’m looking for, but it seems like if there were a typed system for representing this it could be very useful for the kinds of transformations I’m working toward. To be really fancy, careful typing could even disallow invalid transformations.

I had initially thought S-expressions could be useful:

julia> Meta.show_sexpr(hello.body)
(:block,
  (:call, :~, :σ, (:call, :HalfCauchy)),
  (:call, :~, :x, (:call, :Normal, :μ, :σ))
)

but there’s no type or binding info. code_typed is great but too low-level for the current goal.

Is there anything existing I can use for this, or suggestions for building something?

5 Likes

As suggested in a previous discussion, I would write this as a thin layer that uses well-defined composite types underneath: have the macro translate to those (building a graph), do further manipulations with values rather than expressions, and let the compiler take care of figuring out types.
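
Roughly something like this (the type names are just illustrative):

struct Assumption{D}
    name::Symbol
    dist::D      # an actual distribution value, e.g. Normal(0, 1)
end

struct GraphModel{T<:Tuple}
    args::Vector{Symbol}
    nodes::T     # a tuple of Assumptions, so element types stay concrete
end

The macro’s only job is to translate the surface syntax into constructor calls for these; after that, everything is ordinary values with inferable types.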

4 Likes

There’s a lot of ways to plug into the compiler, depending on what types of transformations you need to do.

  • Macros map syntax to syntax
  • Generic function lookup maps type tuples to method implementations
  • Generated functions map type tuples to method generators, which produce syntax
  • I’d say something about Cassette.jl if I understood it better and if I thought you needed it here :wink:

In this case I think @Tamas_Papp is right: you might get away with macros plus a set of carefully designed generic functions.

To expand: macros are for syntax only. You should try not to guess types based on syntax if at all possible; doing so will just limit your package and make it non-composable with types from other packages (which your package cannot possibly know about). Instead, you should design the syntax of your model DSL so that it can be unambiguously and correctly mapped to a lowered syntactic form. Likely this lowered form would be expressed as a set of calls to functions from your package or from Base.

You can then provide this lowered form to the compiler which will create a value from it using generic function dispatch. You are then free to overload the functions of your lowered form as necessary and have the full power of generic dispatch to decide which implementation to select (and hence the type of the returned model).
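
For instance (all names here are hypothetical), x ~ Normal(μ, σ) could lower to a call tilde(:x, Normal(μ, σ)) to a generic function that your package owns:

tilde(name::Symbol, dist) = rand(dist)   # a default implementation: draw a sample

struct Constant{T}
    value::T
end

# another package can extend the function for its own types; dispatch
# picks the implementation (and hence the type of the result)
tilde(name::Symbol, dist::Constant) = dist.value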

3 Likes

Yes, that’s exactly what I’m trying to do. But I’d like many of the transformations to be usable interactively, so the expression -> value mapping can’t be “lossy”; it needs to map back to syntax.

Haha yeah Cassette seems to be gaining some popularity as a catch-all recommendation for anything compiler-related. It seems to do some amazing things, but I don’t see how it would help in this case :slight_smile:

Right. I had thought maybe of only including information about variable bindings. But I could, for example, replace each x ~ dist with x = rand(dist) and then ask Julia to figure out the types for me.
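
Something like this sketch with MacroTools:

using MacroTools: postwalk, @capture

# rewrite every `x ~ dist` into `x = rand(dist)`; the result is ordinary
# Julia code that inference (e.g. code_typed on a function built from it)
# can assign types to
sampled = postwalk(hello.body) do ex
    @capture(ex, v_ ~ d_) ? :($v = rand($d)) : ex
end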

There’s also the potential to use types to help pick apart the parsed syntax, in order to make sure things are put back together in a sensible way.

I’ve only done a small amount of compiler work myself, but I’ve built monad-based DSLs in Haskell, and I’ve been on teams with compiler experts building standalone “deeply embedded” DSLs. In Haskell, monadic approaches are easy to use but limit the available transformations. Deep embeddings are great, but in my experience you end up spending way more time debugging the parser than I’m interested in. Template Haskell is macro-like and has a lot of potential, but it’s awkward to use: it’s not as tightly integrated as macros are in Julia, and most Haskell folks I know tend to recommend against it.

Oops, that was a bit of a tangent. Back on track…

Yes, I wasn’t thinking of anything too fancy here. Since I’ll try to stay very close to expression-level work, this lowering will almost certainly just involve standard Julia evaluation.

This is confusing to me, and makes me think we may be talking about two different things.

Currently I have two kinds of transformations: Model -> Model and Model -> Expr. These are awkward and error-prone to express, so maybe it could help to write these as

Model -> IR -> IR -> Model
and
Model -> IR -> IR -> Expr.

In each case, the first and last in the chain could be written once and for all, so we’d only have to fiddle with the IR -> IR transformations.
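
In code, the plumbing might look like this (IR and the conversion functions are hypothetical names):

# to_ir(m::Model)   -> IR      written once
# from_ir(ir::IR)   -> Model   written once
# to_expr(ir::IR)   -> Expr    written once

transform(f, m::Model)      = from_ir(f(to_ir(m)))   # Model -> Model
transform_expr(f, m::Model) = to_expr(f(to_ir(m)))   # Model -> Expr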

Does that make sense?

I would start from the other end: create a DSL without macros that helps build the DAG of the model, then think about how macros would help simplify it.

1 Like

@Tamas_Papp this is very close to what I have. The @model macro literally does nothing with its second argument; the whole thing just returns an Expr. The first argument is held separately because this was a lot easier to manage than carrying around function expressions. And I can already get the DAG:

julia> dependencies(lda)
10-element Array{Pair{Array{Symbol,1},Symbol},1}:
               [] => :α
               [] => :η
               [] => :K
               [] => :V
               [] => :N
             [:N] => :M
     [:η, :V, :K] => :β
     [:α, :K, :M] => :θ
     [:M, :θ, :N] => :z
 [:M, :N, :β, :z] => :w

It’s not really macro-based at all, just expression manipulation.

So how would you construct the DAG directly? I have to admit that I may find that cleaner, but that is a personal preference :wink:

The DAG’s not really the challenge, unless you mean doing that differently would make other things easier. A simple example of a transformation that could be useful is “non-centering”, e.g.

x ~ Normal(μ,σ)

becomes something like

x_raw ~ Normal(0,1)
x ~ Delta(μ + σ * x_raw)
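
As an expression-level sketch (MacroTools again; Delta and the _raw naming as above):

using MacroTools: postwalk, @capture

noncenter(body) = postwalk(body) do ex
    if @capture(ex, x_ ~ Normal(μ_, σ_))
        raw = Symbol(x, :_raw)
        quote
            $raw ~ Normal(0, 1)
            $x ~ Delta($μ + $σ * $raw)
        end
    else
        ex
    end
end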

Edit: I have to get going now. Thank you for this discussion, I’ll try to make the context of what I’m trying to do clearer when I get a chance

That would really help, as I am not getting the gist of the problem.

This could well be :slight_smile: A concrete small example of the different flavors of transformations you need would be really helpful. I did read the Soss readme, but came away none the wiser about precisely which implementation problem we’re talking about.

Looking at your original question again, it does look like your model is symbolic. Which is fine as far as it goes, but you won’t get the compiler to help you with types without actually evaluating that AST.

  • Cassette normally maps type tuples + contexts to method implementations
  • Cassette passes map type tuples + contexts + method IR to method IR, where IR is the intermediate representation (basically the compiled method implementation)

But yes, Cassette isn’t useful for this.
Cassette is mostly useful when you want to interfere with the running of code that you can’t, or don’t want to, edit, based on where that code is called from.
(Or if you have particular inclinations around dynamic scope or a few other things that contextual dispatch makes easier to implement.)

3 Likes

I think understanding exactly what the problem is, is part of the problem. This and Slack are my only contact with other Julia users, and these interactions are really an immense help. Thanks :slight_smile:

Maybe an example would help. Here’s the code that connects to your TransformVariables.jl library:

using MacroTools: postwalk, @capture
using TransformVariables: as

function getTransform(model)
    expr = Expr(:tuple)
    postwalk(model.body) do x
        # collect a `v = transform` pair for each `v ~ dist` parameter
        if @capture(x, v_ ~ dist_) && v ∈ parameters(model)
            t = fromℝ(@eval $dist)   # fromℝ maps a distribution to its transform
            push!(expr.args, :($v = $t))
        end
        x                            # leave the walked expression unchanged
    end
    return as(eval(expr))
end

I do this kind of thing all over the place, and it seems like it should be getting much easier than it is. It feels like there’s a lot more boilerplate than there needs to be, and there’s a lot of mutation that’s hard to reason about. I get it wrong a lot, which makes the whole process a lot slower.

So I’m trying to figure out…

  1. Is there some AST-oriented library that would make this easier?
  2. Would purely functional manipulations with something like S-expressions work better?
  3. Or maybe there’s a way to clean up this approach, like maybe helper macros to cut down the boilerplate?
  4. Or maybe there’s a path to thinking about this approach that will make it go more smoothly?

BTW, getTransform isn’t done… It works if dist is a distribution with fixed parameters, but breaks if it depends on other arguments. It would be better just to avoid @eval by looking up, e.g., Normal in a dictionary to find the correct transform. But even then, for something like Uniform the support could depend on other variables, making something like an “uncenter” transformation a requirement.
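
For the fixed-support cases, the dictionary version might look like this:

using TransformVariables: asℝ, asℝ₊

# only handles supports with a fixed shape; `dist` is an Expr like :(Normal(μ, σ))
const TRANSFORMS = Dict(:Normal => asℝ, :HalfCauchy => asℝ₊)

gettransform(dist::Expr) = TRANSFORMS[dist.args[1]]   # no eval needed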

1 Like

Thanks for the example, I’m a bit out of my depth in the statistical modelling domain :slight_smile:

Are you aware that @eval (or eval) evaluates dist in the global scope of the containing module? This might not be what you intended! It’s likely to lead to all sorts of confusion regarding scoping of the model variables. (Also, @eval $dist is just eval(dist).)

What we’ve been suggesting further up is that the model expression should probably be evaluated much earlier — directly in the scope where @model was called. Your @model macro can enable this by inserting syntax so that the returned expression is normal julia code which can be evaluated (this is what I mean by lowering).

To understand what I’m getting at, imagine that

  • you’re not allowed to call eval
  • you’re only allowed to manipulate ASTs within macros

Design your library to live with these limitations and I think the problems you’re struggling with will be resolved.

Maybe I’m misunderstanding your point, but I think interpolation works as an end-run around global scope in this case. Example:

julia> x = 3
3

julia> f() = begin
           x = :(4 + 5)
           @eval $x
           end
f (generic function with 1 method)

julia> f()
9

My original approach was to manipulate the AST using macros. But this was a complete mess, mostly because hygiene kept getting in my way more than helping. Eventually someone (I think maybe @dfdx?) suggested I try manipulating expressions with functions, and things have gone much more smoothly since I made that change.

BTW, I’ve heard the advice to avoid eval, but as I understand this is mostly in the context of calling it from a macro. In the current approach I sometimes use @eval $x followed by Base.invokelatest (in a function, not a macro), and it seems to cause no trouble. If this is dangerous, I’d like to better understand the risk.
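
That is, the pattern is roughly (toy example):

function runmodel(ex)
    @eval g() = $ex          # defines g in the module’s global scope
    Base.invokelatest(g)     # a plain g() here would hit a world-age error
end

runmodel(:(1 + 2))   # 3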

We had this discussion before :wink:, and I think we may just prefer different conceptual models. You seem to want to store the model structure in an AST and extract information from it. This is feasible, but with a lot of kludges like eval that I think may prevent the compiler from doing various optimizations.

What I would do is have functions (including constructors) and constants instantiate composite types which represent the model, and keep transforming that, keeping everything in value space.

I think that, with a few limited exceptions, the use of @eval & friends is a code smell, and the first thing I would do is refactor without it. As @c42f said, it may have unexpected consequences.

2 Likes

Nope, the scoping behavior is exactly the same as eval. The docstring says “Evaluate an expression with values interpolated into it using eval.”

Example:

julia> function f()
           y = 5
           x = :(y + 5)
           @eval $x # What is the scope of this eval?
       end
f (generic function with 1 method)

julia> y = 1
1

julia> f()
6
1 Like

This is good advice (especially when testing the manipulations made by a macro), but I would generally only call those functions from within the macro from which the AST arises.

Here’s another fun example of the scoping problems with eval:

julia> function g()
           @eval blah=10
       end
g (generic function with 1 method)

julia> g()
10

julia> blah
10
2 Likes

Yeah but I still haven’t figured it out. Thanks for your patience :slightly_smiling_face:

I don’t understand how any of this could negatively impact the compiler. The whole point is to act as a kind of model preprocessor, well before the compiler sees any of it. It should act as if the user had entered the code by hand.

I still love this idea, and I still don’t see how to implement it concretely. I will meditate on this.

Ahh, good example. So it seems all is fine if you know you’ve interpolated all of the variables, and in that case the only potential problem may be a hit to performance. But if you accidentally forget a $, that variable will be evaluated in global scope and could wreak all kinds of havoc. That totally makes sense.
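
e.g., forgetting the $ silently picks up the global instead:

julia> x = 3;

julia> f() = begin
           x = :(4 + 5)
           @eval x   # forgot the $, so this evaluates the *global* x
       end
f (generic function with 1 method)

julia> f()
3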

So maybe as a rule of thumb, any pure expression manipulation can be a function, but anything involving eval should be refactored with macros to avoid it, if at all possible. I’ll need to look into this some more.

“From which it arises”? I don’t think that works in my case. The whole point is to carry around a Model as code in order to explore transformations. This way model manipulations can be done programmatically or in the REPL. eval is used later in the pipeline (for example when doing inference), but maybe this can be avoided by wrapping that bit in a macro. I had tried this at one point with no luck, but I’ll try some more.

Another good example, thank you :slight_smile:

And here we get to the heart of the matter! Can you give the simplest possible example where you think the model manipulation has to be done at the AST level, and not inside the original macro? You could well be right, but exploring an example will be instructive either way.

Ok here’s a simple case. Say I have

m = @model dist begin
    μ ~ dist
    x ~ Normal(μ, 1)
end

Think of this maybe like a curried function: I haven’t yet decided on a distribution for μ, but I still need to pass in a dist. Then at a later point I can try different values, for example
m(dist = Normal(0, 1)), which should just introduce the corresponding binding and return a new Model with that argument bound.
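
A sketch of what that call could do (not the real implementation; this assumes the two-field Model(args, body) from the top of the thread):

using MacroTools: postwalk

function bind(m::Model; kwargs...)
    body = m.body
    for (k, v) in kwargs
        body = postwalk(ex -> ex === k ? v : ex, body)   # substitute the value for the symbol
    end
    Model(setdiff(m.args, keys(kwargs)), body)           # drop the bound arguments
end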

~ hasn’t yet been defined; this new model is later turned into normal Julia code depending on what kind of inference we want to do.

I need to call it a night, but I really appreciate your time helping think this through. Back to my day job in the morning, hopefully back to this soon :slight_smile: