Are there idioms in Julia for fast Algebraic Data Types (ADT)?

Update: I found a way to encode closed recursive ADTs in Julia and benchmarked this solution against a naive solution. Unfortunately, it performs worse (4.7μs vs 3.3μs for the naive solution), probably because it has to do more allocations.

If anyone finds a better way, please tell me!

Naive solution

abstract type Expression end

struct Const <: Expression
    value :: Int
end

struct Var <: Expression
    varname :: String
end

struct Add <: Expression
    lhs :: Expression
    rhs :: Expression
end

evaluate(e::Const, env) = e.value
evaluate(e::Var, env) = env[e.varname]
evaluate(e::Add, env) = evaluate(e.lhs, env) + evaluate(e.rhs, env)

function sum_of_ints(n)
    if n == 1
        return Const(1)
    else
        return Add(Const(n), sum_of_ints(n - 1))
    end
end

using BenchmarkTools
@btime evaluate(sum_of_ints(100), Dict{String, Int}())
3.228 μs (202 allocations: 5.23 KiB)

Solution that encodes recursive closed ADTs

struct Const
    value :: Int
end

struct Var
    varname :: String
end

struct Add{E}
    lhs :: E
    rhs :: E
end

struct Expression
    ctor :: Union{Const, Var, Add{Expression}}
end

mkConst(value) = Expression(Const(value))
mkAdd(lhs, rhs) = Expression(Add{Expression}(lhs, rhs))
mkVar(var) = Expression(Var(var))

evaluate(e::Expression, env) = evaluate(e.ctor, env)
evaluate(e::Const, env) = e.value
evaluate(e::Var, env) = env[e.varname]
evaluate(e::Add, env) = evaluate(e.lhs, env) + evaluate(e.rhs, env)

function sum_of_ints(n)
    if n == 1
        return mkConst(1)
    else
        return mkAdd(mkConst(n), sum_of_ints(n - 1))
    end
end

using BenchmarkTools
@btime evaluate(sum_of_ints(100), Dict{String, Int}())
4.681 μs (396 allocations: 8.25 KiB)

Maybe I’m missing something, but isn’t it nice that the simple solution is faster? At any rate, I don’t think the performance difference between your two solutions is particularly large.

@CameronBieganek What this experiment indicates, in my opinion, is that the compiler misses an opportunity to optimize the second version. In a perfect world, the second version would be faster: the compiler would leverage the fact that an expression can only be a Const, a Var, or an Add, and make dynamic dispatch very fast on that basis.

A more interesting experiment would be to compare the time it takes to evaluate the naive version in Julia against an equivalent program written in a language with native ADTs such as OCaml, Haskell, or Rust. I am going to try this now.

I did the experiment in OCaml, which turns out to be about twice as fast as the Julia version (1.61μs vs 3.23μs). That is actually not too bad for Julia, and it makes me feel better about using ADTs in Julia.

Benchmark Code

(* requires the Base and Stdio libraries *)
open Base

type expr =
  | Const of int
  | Var of string
  | Add of expr * expr

let rec evaluate expr env =
  match expr with
  | Const v -> v
  | Var x -> List.Assoc.find_exn env ~equal:String.equal x
  | Add (lhs, rhs) -> evaluate lhs env + evaluate rhs env

let rec sum_of_ints = function
  | 1 -> Const 1
  | n -> Add (Const n, sum_of_ints (n - 1))

let profile n =
  let acc = ref 0 in
  let t = Caml.Sys.time () in
  for i = 1 to n do
    acc := !acc + evaluate (sum_of_ints 100) []
  done;
  let dt = (Caml.Sys.time () -. t) /. (Float.of_int n) in
  Stdio.printf "Average time: %.3f μs" (dt *. 1e6);
  acc

let _ = profile 1000000
Average time: 1.653 μs

This is similar to open types, and static exhaustiveness checking would not be affected as long as your analyzer can walk the whole program.

I’m sorry that MLStyle didn’t address this performance issue.

Actually, I did consider the questions you raised here, but due to the restrictions of Julia I haven't really found an approach.


Also, there is an alternative technique to ADTs, called tagless final.

The ADT approach is called the initial approach in some contexts, and tagless final is accordingly called the final approach.

For your code, we can use tagless final to get a type-stable Julia program:

struct SYM{F1, F2}
    constant :: F1
    add :: F2
end

function constant(v)
    function (sym::SYM)
        sym.constant(v)
    end
end

function add(term1, term2)
    function (sym::SYM)
        sym.add(term1(sym), term2(sym))
    end
end

# self algebra
self = SYM(constant, add)

evaluate =
    let constant(v::Int) = v,
        add(l::Int, r::Int) = l + r
        SYM(constant, add)
    end


println(add(constant(2), constant(3))(evaluate))
@code_warntype add(constant(2), constant(3))(evaluate)

There are no red points (type instabilities); try the above code in your Julia REPL:

5
Variables
  #self#::var"#17#18"{var"#15#16"{Int64},var"#15#16"{Int64}}
  sym::Core.Compiler.Const(SYM{var"#constant#19",var"#add#20"}(var"#constant#19"(), var"#add#20"()), false)

Body::Int64
1 ─ %1 = Base.getproperty(sym, :add)::Core.Compiler.Const(var"#add#20"(), false)
│   %2 = Core.getfield(#self#, :term1)::var"#15#16"{Int64}
│   %3 = (%2)(sym)::Int64
│   %4 = Core.getfield(#self#, :term2)::var"#15#16"{Int64}
│   %5 = (%4)(sym)::Int64
│   %6 = (%1)(%3, %5)::Int64
└──      return %6

Thanks @thautwarm! I’ve heard FP people talk a lot about final tagless, and I’ve explored it a bit. But I still don’t really understand its full potential, or any limitations. Are there situations you would definitely reach for this, or any where you would avoid it?

BTW for a nice interface for your example, you can do

julia> (sym::SYM)(term) = term(sym)

julia> evaluate(add(constant(2), constant(3)))
5

Like @cscherrer, I am still not sure that I can see the full potential of “final tagless ADTs”.

Also, I do not see how it improves on the simple solution from @Mason:

abstract type Expression end
struct Const{T} <: Expression
    value :: T
end
struct Add{L, R} <: Expression
    lhs::L
    rhs::R
end

evaluate(e::Const) = e.value
evaluate(e::Add) = evaluate(e.lhs) + evaluate(e.rhs)

In both cases, some specialized code is generated to evaluate a single, specific expression (or, more rigorously, a small family of expressions sharing the exact same tree structure), and so no tags are needed. Dispatch happens not at runtime but during JIT compilation.
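Concretely (a small illustration using the structs above), the whole tree shape is encoded in the type, so each distinct shape gets its own compiled specialization:

e = Add(Const(1), Add(Const(2), Const(3)))
typeof(e)    # Add{Const{Int64}, Add{Const{Int64}, Const{Int64}}}
evaluate(e)  # 6, compiled specifically for this tree type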

However, I have a hard time finding a lot of situations where this is really what you want. When working with a large number of expressions, you probably do not want to compile one version of the evaluation function per expression to evaluate. And even when working with a small number of expressions, can the savings that result from avoiding dynamic dispatch outweigh the increased cost of JIT compilation?


One pain point that I’ve never seen solved in Julia is building trees top-down, for example for a decision tree. You might try something like

abstract type Tree end

struct Branch{L,R} <: Tree
    left :: L
    right :: R
end

struct Leaf{T} <: Tree
    value :: T
end
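
For instance (a hypothetical tree, just to show the types involved):

t = Branch(Branch(Leaf(1), Leaf(2)), Leaf(3))
typeof(t)  # Branch{Branch{Leaf{Int64}, Leaf{Int64}}, Leaf{Int64}}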

But the L and R values aren’t known until the tree is done. Could final tagless be better for this?


@cscherrer Thanks for the wrapping!

Tagless final can do everything ADTs and GADTs can do, and IIRC there should be mechanical methods to transform code from the ADT approach to the tagless final approach.

The advantages of the ADT (initial) approach and the final approach are different:

When you’re using ADTs, things are totally straightforward because you see concrete data.

ADTs (tagged unions) are signals, data, and descriptors.

When you want to use them, you’re supposed to write interpreters/evaluators for your ADTs, like writing pattern matching to decide constructor-specific behaviors.

When you’re using tagless final, you manipulate the operations extracted from the ADT data, and the data itself turns out to be unnecessary.

Tagless final

  • avoids the use of explicit tags (hence, tagless). It captures the observation that what you will finally do with your ADTs (the operations) can live without the ADTs themselves.

  • encodes the ADTs as post-order visiting functions, which can be composed into bigger operations, just as we compose ADT constructors to build recursive data.

  • has many other advantages; for example, it can be used to achieve pattern matching without metaprogramming libraries like MLStyle.jl or Match.jl, and the resulting pattern matching is naturally first-class.

Tagless final is usually more concise because the post-order visit is built in, whereas you must implement it manually when using ADTs.
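
For instance (a minimal sketch reusing the SYM type and the constant/add term constructors from the example above), the same term can run under a second algebra, here a pretty-printer, with no changes to the term:

pretty =
    let constant(v) = string(v),
        add(l, r) = string("(", l, " + ", r, ")")
        SYM(constant, add)
    end

term = add(constant(2), constant(3))  # built with the global constructors
println(term(pretty))                 # (2 + 3)
println(term(evaluate))               # 5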

Unfortunately, the above way of transforming ADTs to tagless final does not directly apply to all Julia code, because Julia is a strict (eagerly evaluated) language.

Given a term Add(left, right), you might want to visit the Add node first, and perhaps skip visiting left and right; i.e., you want pre-order visiting.

However, with tagless final, the sub-components of Add(left, right) are always visited first, so supporting pre-order visiting or other visiting strategies can be a little awkward.
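
A sketch of one workaround (the LazySYM names are mine, not from the thread or MLStyle): have the term constructors pass thunks, so each interpreter decides whether and when to force the children.

struct LazySYM{F1, F2}
    constant :: F1
    add :: F2
end

lazy_constant(v) = (sym::LazySYM) -> sym.constant(v)
lazy_add(t1, t2) = (sym::LazySYM) -> sym.add(() -> t1(sym), () -> t2(sym))

# this interpreter forces both children, but another could skip or reorder them
lazy_eval = LazySYM(v -> v, (l, r) -> l() + r())

println(lazy_add(lazy_constant(2), lazy_constant(3))(lazy_eval))  # 5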

Hi, I agree with you.

I just want to post something here to offer a point of view: tags are in fact unnecessary, because we only match tags in order to perform operations on the data.

The final operations are always what we actually need.

For example, coming back to your example, the tagless final encoding is also concise (my previous reply just spelled out many of the underlying structures):

struct HowWeWorkWithExpr{F1, F2}
    constant :: F1
    add :: F2
end

evaluate =
    let constant(v::Int) = v,
        add(l, r) = l + r
        HowWeWorkWithExpr(constant, add)
    end

It’s equivalent to Mason’s code, because the initial approach is actually almost equivalent to the final approach.

Hence I have to say there are no improvements.

However, I personally prefer tagless final because in this way

  • we don’t need type parameters on every constructor
  • we don’t need abstract types
  • fewer global variables
  • fewer data types

Further, if you try to build a larger ADT and its evaluator, you might find some of the interesting conveniences that tagless final brings:

you never need to write code for picking fields/contents out of the data, which is easier on my shoulders…
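
As a hedged sketch of that convenience (the ExprSYM/econst/eadd/evar names are mine), here is a slightly larger algebra that adds variables back, evaluating a term to a function of an environment:

struct ExprSYM{F1, F2, F3}
    constant :: F1
    add :: F2
    var :: F3
end

econst(v) = (sym::ExprSYM) -> sym.constant(v)
eadd(t1, t2) = (sym::ExprSYM) -> sym.add(t1(sym), t2(sym))
evar(name) = (sym::ExprSYM) -> sym.var(name)

# each case returns env -> value; no field-picking code anywhere
eval_with_env = ExprSYM(
    v -> env -> v,
    (l, r) -> env -> l(env) + r(env),
    name -> env -> env[name],
)

term = eadd(evar("x"), econst(1))
println(term(eval_with_env)(Dict("x" => 41)))  # 42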


Thanks Chad, this is really a good example!

See this:

struct Tree{C1, C2}
    branch :: C1
    leaf :: C2
end

# polymorphic `branch` constructor
branch(left, right) =
    function run(mod)
        mod.branch(left(mod), right(mod))
    end

# polymorphic `leaf` constructor
leaf(value) =
    function run(mod)
        mod.leaf(value)
    end


# eval to Int
tree_sum_eval = Tree(
    +, # how to evaluate branch
    identity # how to evaluate leaf
)

# eval to a printer: a function of (indent::String) that prints the tree
tree_print = Tree(
    function (left, right)
        function run(indent)
            println(indent, "-")
            left(indent * "  ")
            right(indent * "  ")
        end
    end,
    function (value)
        function run(indent)
            println(indent, value)
        end
    end
)

# a tree

tree =
    branch(
        branch(leaf(1), leaf(2)),
        branch(
            leaf(2),
            branch(leaf(5), leaf(2))
        )
    )

tree(tree_print)("")

println(tree(tree_sum_eval))

run it:

λ julia a.jl
-
  -
    1
    2
  -
    2
    -
      5
      2
12

Thanks for your detailed answer and for your great work on MLStyle.jl. :slight_smile:


Thanks @thautwarm, I made a little playground to explore these ideas:

Oh, looks good!
Besides, I think so-called delimited continuations can also solve the problem of building trees top-down.
However, the problem with tagless final and delimited continuations is that they don't actually build a concrete tree node before the children nodes are built up. Things are just postponed.


Have you tried delimited continuations in Julia? I’ve used continuations but haven’t gotten my head around the delimited version.

I think TaglessTrees and IRTools could be good together for abstract interpretation: make everything a call, then decide what a call should do for a given interpreter.


No, I haven’t. I’m now learning about it in my courses; it has some amazing use cases. (However, I’m not really fond of such an extremely complex thing.)

Good observations!

Yes, once everything becomes a call, everything is hookable, and an interpreter can simply be extended (derived) from another.

You just inspired me; I feel this is cool and user-friendly:

Tree{C1, C2} = NamedTuple{(:branch, :leaf), Tuple{C1, C2}}

# given some evaluator::Tree, derive a new one by overriding a single field:
new_evaluator = (; evaluator..., leaf = some_extension(evaluator.leaf))

looks good…
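
A runnable version of that sketch (some_extension here is just a stand-in tweak, and the branch/leaf term constructors are the ones from the Tree example above):

TreeAlg{C1, C2} = NamedTuple{(:branch, :leaf), Tuple{C1, C2}}

evaluator = (branch = +, leaf = identity)  # evaluator isa TreeAlg
some_extension(f) = v -> 10 * f(v)         # hypothetical tweak to leaf handling

# derive a new interpreter by overriding a single field
new_evaluator = (; evaluator..., leaf = some_extension(evaluator.leaf))

t = branch(branch(leaf(1), leaf(2)), leaf(3))
println(t(evaluator))      # 6
println(t(new_evaluator))  # 60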


I found this very puzzling, too. So I looked into what the compiler is doing.

(I put the “naive solution” code in a NaiveSolution module; that’s why you are seeing the NaiveSolution. prefix.)

This is what the optimized function does at the beginning:

julia> @code_typed NaiveSolution.evaluate(NaiveSolution.sum_of_ints(2), Dict{String,Int}())
CodeInfo(
1 ── %1  = Base.getfield(e, :lhs)::Main.NaiveSolution.Expression
│    %2  = Main.NaiveSolution.evaluate::Core.Compiler.Const(Main.NaiveSolution.evaluate, false)
│    %3  = (isa)(%1, Main.NaiveSolution.Const)::Bool
└───       goto #3 if not %3
2 ── %5  = π (%1, Main.NaiveSolution.Const)
│    %6  = Base.getfield(%5, :value)::Int64
└───       goto #12

This is equivalent to

if e.lhs isa Const
    e.lhs.value
end

i.e., the compiler eliminates the dynamic dispatch to evaluate(::Const) and then inlines the method body.

The compiler does this for Var:

3 ── %8  = (isa)(%1, Main.NaiveSolution.Var)::Bool
└───       goto #9 if not %8
...

and for Add:

9 ── %24 = (isa)(%1, Main.NaiveSolution.Add)::Bool
└───       goto #11 if not %24
...

Interestingly, the compiler also creates the branch where e.lhs is none of the above (and infers that its return type is Int):

11 ─ %29 = Main.NaiveSolution.evaluate(%1, env)::Int64
└───       goto #12
...

So, the compiler generates what is equivalent to

lhs = e.lhs
if lhs isa Const
    ...
elseif lhs isa Var
    ...
elseif lhs isa Add
    ...
else
    ...
end

where ... in each if branch inlines the relevant definition of evaluate.

Then there is a similar piece of code for e.rhs.

Interestingly, it looks like the compiler can prove that everything evaluates to Int. So it can eliminate the dynamic dispatch for + and infer the return type:

12 ┄ %31 = φ (#2 => %6, #8 => %20, #10 => %27, #11 => %29)::Int64
...
23 ┄ %62 = φ (#13 => %37, #19 => %51, #21 => %58, #22 => %60)::Int64
│    %63 = Base.add_int(%31, %62)::Int64
└───       return %63
) => Int64

I think what this example indicates is that, since the compiler knows the whole type tree and the method table, we don’t need to tell it that Expression is closed.
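
(As a sanity check, a small sketch: subtypes from InteractiveUtils enumerates exactly the information the compiler has at hand.)

using InteractiveUtils  # provides subtypes

subtypes(NaiveSolution.Expression)
# expected: Add, Const, Var (with module prefixes)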

Though I’m still puzzled why the “closed ADTs” version is slower than the “naive” version. Looking at the typed IR, the quality of inference looks equivalent. It’d also be nice to know what kind of optimization is missing w.r.t. the OCaml example.

That is because the construction of AnySignedInteger is more expensive.
If we preallocate things in this way:

open_data = SignedInteger[posvec; negvec]
closed_data = AnySignedInteger[posvec; negvec]

function test_open()
    return sum(value(x) for x in open_data)
end

function test_closed()
    return sum(value(x) for x in closed_data)
end


julia> @btime test_closed()
  142.756 ns (0 allocations: 0 bytes)
0

julia> @btime test_open()
  288.318 ns (0 allocations: 0 bytes)
0

Then closed is faster.
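
For readers landing here: SignedInteger, AnySignedInteger, posvec, negvec, and value come from an earlier post in this thread. A hypothetical reconstruction of the shape of those definitions (details may differ from the original):

# hypothetical reconstruction, not the original post's exact code
abstract type SignedInteger end        # open: new subtypes can be added later
struct Positive <: SignedInteger
    value :: Int
end
struct Negative <: SignedInteger
    value :: Int
end
struct AnySignedInteger                # closed: a fixed union, wrapped
    x :: Union{Positive, Negative}
end

# allow the typed-array literals above to convert elements
Base.convert(::Type{AnySignedInteger}, x::Union{Positive, Negative}) = AnySignedInteger(x)

value(x::Positive) = x.value
value(x::Negative) = -x.value
value(x::AnySignedInteger) = value(x.x)

posvec = [Positive(i) for i in 1:100]
negvec = [Negative(i) for i in 1:100]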

I tried pre-allocating the expression object:

function benchmarkable()
    @benchmarkable evaluate($(sum_of_ints(100)), Dict{String, Int}())
end

but the “naive” solution is still faster for me:

julia> run(NaiveSolution.benchmarkable())
BenchmarkTools.Trial:
  memory estimate:  608 bytes
  allocs estimate:  4
  --------------
  minimum time:     827.000 ns (0.00% GC)
  median time:      941.000 ns (0.00% GC)
  mean time:        1.186 μs (0.00% GC)
  maximum time:     35.646 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1

julia> run(ClosedADT.benchmarkable())
BenchmarkTools.Trial:
  memory estimate:  608 bytes
  allocs estimate:  4
  --------------
  minimum time:     1.274 μs (0.00% GC)
  median time:      1.505 μs (0.00% GC)
  mean time:        1.673 μs (0.00% GC)
  maximum time:     36.319 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1

And I don’t get why the closed solution can be faster for evaluate. The “naive” solution is already doing union splitting; looking at the IR, the same is true for SignedInteger. I think the difference in speed comes from the layout of the array (this part of Julia’s internals is optimized to handle Union{T,Missing} etc.), not from the user code.

(Also, closed was already faster in the SignedInteger/AnySignedInteger case earlier in this thread.)

Oh yes, sorry about that.

However, after preallocating my way, the difference becomes really minor.

The original code has a performance gap on my machine:

julia> @btime test_open()
  1.600 μs (202 allocations: 4.92 KiB)
0

julia> println("Testing closed version")
Testing closed version

julia> @btime test_closed()
  612.139 ns (2 allocations: 2.02 KiB)
0