Good Practices for Making Performant DSLs? (My Dissertation Topic)

Hiya,

I’m currently a final-year student and my supervisor works in scientific computing. He has a tool that makes heavy use of a DSL embedded in Python, and he wants me to investigate whether DSLs written in Julia are faster than ones written in Python.

I have attempted to write my very first basic DSL, which just does symbolic differentiation of basic algebraic expressions; it can be found here: https://github.com/AndrewMonteith/DSLs/blob/master/algebra.jl. I tried to make heavy use of the type system and multiple dispatch to keep my language compact (is this normally a good idea?)
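For readers who don’t want to open the repo, here is a minimal sketch of that general approach (hypothetical names, not the code in the linked file): each expression kind gets its own type, and differentiation is one small method per node type via multiple dispatch.

```julia
# Hypothetical sketch of a dispatch-based symbolic differentiator.
abstract type AlgExpr end
struct Const <: AlgExpr; val::Float64; end
struct Var   <: AlgExpr; name::Symbol; end
struct Sum   <: AlgExpr; lhs::AlgExpr; rhs::AlgExpr; end
struct Prod  <: AlgExpr; lhs::AlgExpr; rhs::AlgExpr; end

differentiate(::Const, ::Symbol)  = Const(0.0)
differentiate(e::Var, x::Symbol)  = Const(e.name === x ? 1.0 : 0.0)
differentiate(e::Sum, x::Symbol)  =
    Sum(differentiate(e.lhs, x), differentiate(e.rhs, x))
differentiate(e::Prod, x::Symbol) =              # product rule
    Sum(Prod(differentiate(e.lhs, x), e.rhs),
        Prod(e.lhs, differentiate(e.rhs, x)))

# Numeric evaluation in one variable, handy for sanity-checking results:
evaluate(e::Const, v) = e.val
evaluate(e::Var, v)   = v
evaluate(e::Sum, v)   = evaluate(e.lhs, v) + evaluate(e.rhs, v)
evaluate(e::Prod, v)  = evaluate(e.lhs, v) * evaluate(e.rhs, v)

# d/dx (x*x) evaluated at 3.0 gives 6.0
evaluate(differentiate(Prod(Var(:x), Var(:x)), :x), 3.0)
```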

In the code I linked, I’m confused about the behavior of the bottom 2 statements.
Line 122: 27 allocations and 760 bytes.
Line 123: 10.8k allocations and 494 KB.

Line 123 only adds one additional term, though, which I would have thought means just a few more structs. So is there an exponential increase in allocations?

Further to my above question, I haven’t been able to find many resources on how to make good DSLs. Any suggestions on resources or general tips for good DSL practice in Julia?

Cheers,
Andrew

My latest drive is actually to make DSLs by not having them be DSLs. For example, ModelingToolkit.jl is a compiler IR which can automatically translate numerical code in its IR to do DSL-like transformations.

http://tutorials.juliadiffeq.org/html/ode_extras/01-ModelingToolkit.html

Thus, while there are DSLs like Pumas.jl that use ModelingToolkit.jl as their internal IR (so that multiple DSLs don’t have to re-implement the same symbolic routines), there is in some sense no reason to require the user to write in a DSL if you can automatically perform program analysis on a (limited) form of standard code.

But for building DSLs, I think the right standard practice for the future is this idea of building a standard IR that DSLs lower to, then optimizing that one IR and giving it a large feature set so that new DSLs are easy to build. If you look at it, most DSLs spend most of their time doing the same things (calculating Jacobians, Hessians, etc.), so that work might as well be standardized in a “model IR and compiler”.
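As a toy illustration of that “shared IR” idea (hypothetical names and structure, not ModelingToolkit’s actual API): two different surface syntaxes can lower to the same IR node types, so every symbolic pass is written once against the IR and both front-ends get it for free.

```julia
# One shared IR...
abstract type Node end
struct Num   <: Node; val::Float64; end
struct Sym   <: Node; name::Symbol; end
struct Plus  <: Node; lhs::Node; rhs::Node; end
struct Times <: Node; lhs::Node; rhs::Node; end

# ...front-end 1: operator overloading, so users write ordinary-looking code.
Base.:+(a::Node, b::Node) = Plus(a, b)
Base.:*(a::Node, b::Node) = Times(a, b)

# ...front-end 2: a quoted-expression surface syntax lowered to the same IR.
lower(x::Number) = Num(Float64(x))
lower(s::Symbol) = Sym(s)
function lower(ex::Expr)
    ex.head === :call || error("unsupported expression")
    op, a, b = ex.args
    op === :+ ? Plus(lower(a), lower(b)) :
    op === :* ? Times(lower(a), lower(b)) :
    error("unsupported operator $op")
end

# A pass over the IR serves both front-ends, e.g. a trivial simplifier
# that drops multiplications by 1:
simplify(n::Node)  = n
simplify(n::Times) = n.lhs isa Num && n.lhs.val == 1.0 ? simplify(n.rhs) :
                     Times(simplify(n.lhs), simplify(n.rhs))

e1 = Sym(:x) + Sym(:y)     # built via front-end 1
e2 = lower(:(x + y))       # built via front-end 2 -- same IR nodes
```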

Pushing even further, we show that one can directly perform program analysis through non-standard interpretation. For example, finding sparsity patterns:

https://openreview.net/pdf?id=rJlPdcY38B

We mention in the conclusion that optimization DSLs, like JuMP.jl (which I would point to as one of the best-designed DSLs out there), could possibly be replaced by automated program introspection. Given the Cassette.jl tooling we showcase in that paper, it’s clear how one can automatically determine whether something is linear, quadratic, convex, etc. directly from code, along with building whatever alternative functions an interface asks for; thus, requiring the user to use a DSL may be fully replaced by sufficiently smart analyses. We are now working with the JuMP developers on the NLP DSL for their next version to see if these ideas can be put to use.
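To give a flavour of what “determining linearity from code” can look like (a hedged, much-simplified sketch, not the Cassette.jl machinery from the paper): run the user’s function on a made-up number type that tracks polynomial degree instead of a value, and read the degree off the result.

```julia
# Hypothetical "degree-tracking" number for non-standard interpretation.
struct Deg
    d::Int   # polynomial degree in the input variable
end
Base.:+(a::Deg, b::Deg)  = Deg(max(a.d, b.d))   # degree of a sum
Base.:*(a::Deg, b::Deg)  = Deg(a.d + b.d)       # degree of a product
Base.:+(a::Deg, ::Real)  = a
Base.:+(::Real, b::Deg)  = b
Base.:*(a::Deg, ::Real)  = a
Base.:*(::Real, b::Deg)  = b

# Interpret f on Deg(1) (the variable itself) instead of a real number:
degree(f) = f(Deg(1)).d   # 0 = constant, 1 = linear, 2 = quadratic, ...

f_lin(x)  = 3x + 2
f_quad(x) = x*x + x
```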

struct Variable <: AbstractExpression
    var::String 

    Variable(sym::Symbol) = new(string(sym))
end

Using a string for every name is pretty heavy. That might be your issue.
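To spell that out with a small sketch (hypothetical struct names): Symbols are interned, so every node for `:x` can share one object, while `string(sym)` produces a fresh heap-allocated String on every construction.

```julia
# Variant that converts to String, as in the linked code: allocates per node.
struct NameAsString
    var::String
    NameAsString(sym::Symbol) = new(string(sym))  # fresh String each call
end

# Variant that keeps the Symbol: interned, no per-node copy.
struct NameAsSymbol
    var::Symbol
end

a, b = NameAsSymbol(:x), NameAsSymbol(:x)
a.var === b.var                    # true: one shared interned Symbol

s, t = NameAsString(:x), NameAsString(:x)
pointer(s.var) != pointer(t.var)   # equal contents, but two separate heap objects
```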

If you’d like to collaborate, let me know. For doing a lot of transforms, you may want to look at the Rewrite.jl + Simplify.jl stack, which we are building to provide the standard simplification routines.