Heavy macro use or not?

Julia macros are powerful, capable of creating their own DSLs and so on. However, the generated code can be really hard to predict. Some would argue that the bird's-eye view of the code's functionality is more important than the precise control flow used to achieve that functionality, and thus one should use macros to create a DSL that expresses the necessary details about the needed functionality whenever that is more compact than expressing it as functions. Others would argue that adding functionality as a macro makes it hard to compose with other functionality. Let's make a poll to see which camp you're in. Funnily enough, SciML has a guideline against heavy macro use, but does it anyway in ModelingToolkit.

  • Heavy use of macros
  • Avoid excessive macros

I think that the wording of this poll will bias the results towards the second option. Even most people who like the use of macros wouldn’t encourage a “heavy” or “excessive” use.


Oops… my bad. Let’s change the poll.

  • Use macros to create easy interface to complex code.
  • Avoid complex macros.

Our codebase has 34,000 lines of Julia, and 6 macros.

One of those macros is pretty critical and enables our in-house DSL, shaving thousands of lines off our codebase. It's 60 lines long, pretty complex, and has been the source of a lot of bugs – but still very high value. Our DSL is a little unusual in that it looks like normal Julia code, but is lazily evaluated by building a DAG.
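For readers curious what that kind of DSL can look like mechanically, here is a generic toy sketch (not the poster's actual DSL; `Node`, `lazy`, `@dag`, and `evaluate` are made-up names) of a macro that makes normal-looking Julia expressions build a DAG lazily instead of evaluating:

```julia
# Toy sketch only: rewrite function calls into lazy DAG nodes.
struct Node
    op::Symbol
    args::Vector{Any}
end

lazy(op, args...) = Node(op, collect(args))

# Recursively replace every `f(x, y)` call with `lazy(:f, x, y)`.
macro dag(ex)
    walk(e) = e isa Expr && e.head == :call ?
        Expr(:call, lazy, QuoteNode(e.args[1]), map(walk, e.args[2:end])...) :
        e
    walk(ex)
end

# Force the DAG when you actually want the value.
evaluate(n::Node) = getfield(Base, n.op)(map(evaluate, n.args)...)
evaluate(x) = x

g = @dag 2 * 3 + 4   # builds Node(:+, [Node(:*, [2, 3]), 4]); computes nothing yet
```

The call site looks like ordinary arithmetic, but nothing runs until `evaluate(g)` walks the graph.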

The remaining five I would describe as conveniences. I would miss them, but they’re nowhere close to essential. They’re also all much smaller, ~5-10 lines each.

I’m not sure where that falls in your poll, but I would guess “heavy use”.


Some related discussion here: How to discourage macros: disable automerge of General registry PRs?

I still feel almost all macro definitions I see in the ecosystem are unnecessary and regrettable.

Macros have caused some of the strangest errors I’ve encountered in any programming language. For instance, in Julia, writing f (x) results in an error because of the space between the function name and the parentheses.

Sorry, I do not understand what this has to do with macros. This is the syntax of the language, with or without macros.


Macros transform code; that's their use case. The issue is knowing when you need to transform code, and transforming code needlessly is obviously terrible. The rule of thumb from an old talk is to consider writing functions first, then higher-order functions, then macros; it turns out base Julia didn't need to define many macros, and many of them just process more convenient function-call syntax before calling an also-useful function that does the real work. Calling macros, however, is routine (how else are you going to @inline?).
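As a toy illustration of that ladder (the names here are made up): a higher-order function frequently does the job, and the macro is then only a thin syntax layer over it.

```julia
# Higher-order function: the caller passes the code as a zero-argument closure.
function with_timing(f)
    t0 = time()
    result = f()
    println("elapsed: ", time() - t0, " s")
    return result
end

with_timing(() -> sum(1:1000))

# Macro layer: purely cosmetic; it just rewrites the call into the function above.
macro with_timing(ex)
    :(with_timing(() -> $(esc(ex))))
end

@with_timing sum(1:1000)
```

Both calls do the same work; the macro exists only to drop the `() ->` at the call site.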


The original rationale for disallowing f (x) was related to macro semantics but it’s also problematic in constructions like [f (x)]. See make `f (a,b)` an error · Issue #7232 · JuliaLang/julia · GitHub for the full story.


That’s true, but the array is special syntax anyway, so f (x) is not to blame here. Similar to [2 +5im], where anyone would expect 2 elements.
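For the record, that bracket behavior is easy to check: inside `[...]`, whitespace separates elements (horizontal concatenation), so the `+` binds to `5im` as a unary plus.

```julia
a = [2 +5im]   # two elements: hcat of 2 and +5im → 1×2 Matrix
b = [2 + 5im]  # one element: the sum 2 + 5im → 1-element Vector

length(a)  # 2
length(b)  # 1
```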


Macros can be fantastic (if not strictly necessary) for eliminating boilerplate, e.g. in BaseDirs.jl I have

@setxdg DATA_HOME "~/.local/share"
@setxdgs DATA_DIRS ["/usr/local/share", "/usr/share"]
@setxdg CONFIG_HOME "~/.config"
@setxdgs CONFIG_DIRS ["/etc/xdg"]
@setxdg STATE_HOME "~/.local/state"
@setxdg BIN_HOME "~/.local/bin"
@setxdg CACHE_HOME "~/.cache"

Each invocation of @setxdg is saving 3 lines (4 → 1) and improving code clarity. There are 49 uses of this macro across the codebase, IMO this is well worth it.

In DataToolkit.jl, I define a few macros such as @addpkg, @require, and @advise.

These could all be removed in favor of functions, but once again clarity and readability would suffer. Manually passing in @__MODULE__ every time you invoke a function (when it’s a commonly called function) is also just a bit of a pain.
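To illustrate that last point with a hedged sketch (this is not DataToolkit's actual implementation; `addpkg`/`@addpkg` here are illustrative stand-ins): a macro receives the calling module as the implicit `__module__` argument, so it can pass it along on the user's behalf.

```julia
# Stand-in function that needs to know the caller's module.
addpkg(mod::Module, name::Symbol) = string("registered ", name, " in ", mod)

# The macro injects the calling module automatically via `__module__`,
# so users write `@addpkg Foo` instead of `addpkg(@__MODULE__, :Foo)`.
macro addpkg(name::Symbol)
    :(addpkg($__module__, $(QuoteNode(name))))
end
```

`@addpkg Foo` then expands to exactly the function call, with the module filled in for free.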

String macros are also rather handy, as an example I’d pick the IMO indispensable ~500 line (yes, it’s that big) string macro in StyledStrings.jl.

styled"{bold,(bg=yellow):hey} {green:there $you}"

# The above expands to (cleaned up):

    annotatedstring(
        AnnotatedString("hey there ",
                        [(1:3, :face => :bold),
                         (1:3, :face => Face(background=:yellow)),
                         (5:10, :face => :green)]),
        let you_str = string(you)
            you_len = ncodeunits(you_str)
            if you_str isa AnnotatedString && !isempty(you_str)
                # existing annotations merged (vcat) with (1:you_len, :face => :green)
                …
            else
                AnnotatedString(you_str, [(1:you_len, :face => :green)])
            end
        end) |> annotatedstring_optimize!

Maybe I’m missing something, but I would find it hard to classify the above as “unnecessary and regrettable”.


Macros can be awesome for providing convenient access to complex APIs.

But I utterly hate cases where macros are the primary documented API. Even worse are cases where macros are the only documented public stable API!

Look at e.g. BenchmarkTools for how not to do it. It is nigh impossible and mostly undocumented how to use that without macros.

In a saner world, they would document a function makeBenchmarkable(expr::Expr) that creates some benchmarkable object from an expression, and then describe that e.g. @btime et al. are thin wrappers over that function (take the expression, makeBenchmarkable it, then run the benchmark).

This is also the case in the C world. It is utterly disgusting to have a C library with publicly documented macro-only APIs. Come on, please describe the exported symbols of your shared library, not your C-centric syntactic sugar.

That’s not a “real” complex macro.

It’s just a decision to not use language keywords for stuff like @goto / @label / @inline and instead overload the macro machinery. Same for the new atomics stuff.

I can live with that, but this is not the kind of macro use that is contentious.


Even better, makeBenchmarkable could take a callable object.
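A minimal sketch of what such a callable-first API could look like (hypothetical names, not BenchmarkTools' actual internals): the benchmarkable object just wraps a callable, and any macro would be a thin wrapper producing one.

```julia
# Hypothetical function-first benchmarking API.
struct Benchmarkable
    thunk::Function
end

makeBenchmarkable(f) = Benchmarkable(f)

function runbenchmark(b::Benchmarkable; evals::Int = 100)
    b.thunk()  # warm-up call so compilation time isn't measured
    minimum(@elapsed(b.thunk()) for _ in 1:evals)
end

b = makeBenchmarkable(() -> sum(rand(100)))
t = runbenchmark(b)   # minimum elapsed time in seconds
```

A convenience macro could then be documented as sugar that wraps its expression in `() -> …` and calls `makeBenchmarkable`.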

It’s difficult to say from just the invocation snippet, but this seems like it should have been a single data structure construction call, perhaps just a named tuple, instead of seven different (macro) calls?


For reference, @setxdg CONFIG_HOME "~/.config" expands to

    chopsuffix(ENV["XDG_CONFIG_HOME"], Base.Filesystem.path_separator)

it’s not complicated, but you can’t get around the boilerplate of needing both `XDG_CONFIG_HOME[]` and the string `"XDG_CONFIG_HOME"` without a macro. Is this worth exposing to a user? No, but it’s nice for internal usage.

The rest of my examples are more public-facing.


You mean like run(b::Benchmark[, p::Parameters = b.params]; kwargs...)?

That’s certainly not without macros; the Benchmark instance is made in the expression returned by @benchmarkable. But that’s the thing: this isn’t a generic instance-in, instance-out situation. You are actually transforming code and evaling functions in the global scope besides making that Benchmark instance, which is justifiably a macro’s jurisdiction.

Macros are really functions deep down, so if you really want to, you can get at the part between the Meta.parse and eval steps; the cleanest stable way I know is macroexpand(inputModule, :(@inputmacro $(inputExprs...))). This plus an eval could be wrapped in a function that applies a macro to a runtime Expr at global scope, but not many will prefer the inconvenient cosmetic change from @foo blah() to foo(@__MODULE__, :(blah())) just to transform source code. Note that such a function isn’t a drop-in replacement for a macro call in general, because multiple macro calls in local scopes are expanded before a global eval. Not everything does its work in the global scope like BenchmarkTools.
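Concretely, that macroexpand-then-eval route can be wrapped like this (`applymacro` and `@twice` are made up for the illustration):

```julia
# Toy macro to apply.
macro twice(ex)
    :($(esc(ex)) + $(esc(ex)))
end

# Apply macro `@macroname` to a runtime Expr in module `mod`, then eval the
# expansion there (global scope, so not a drop-in replacement in local scopes).
function applymacro(mod::Module, macroname::Symbol, args...)
    call = Expr(:macrocall, Symbol("@", macroname), LineNumberNode(0), args...)
    Core.eval(mod, macroexpand(mod, call))
end

applymacro(@__MODULE__, :twice, :(1 + 2))   # same as `@twice 1 + 2`
```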


There is no reason to make the API for setting XDG config variables a macro. You’re only using a macro in order to relate the name of the variable to the default, via string ops.

The package that originally defines the string variables already knows these relations. So it could have

module XdgStuff

const XDG_defaults = Dict{Base.RefValue{String}, String}()
const XDG_CONFIG_HOME = Ref("")
XDG_defaults[XDG_CONFIG_HOME] = "XDG_CONFIG_HOME" # register which env var each Ref corresponds to

function setxdg(r::Base.RefValue{String}, v::String)
    key = XDG_defaults[r] # maybe throw with a better error message if users try to set unregistered xdg configs?
    r[] = (haskey(ENV, key) && !isempty(ENV[key])) ? chopsuffix(ENV[key], Base.Filesystem.path_separator) : v
end

end

allowing for

setxdg(XDG_CONFIG_HOME, "~/.config")

The string stuff can also be done via macros, but these would not be user-facing API, they would be used to define the exported XDG config variables and associate them to the environment variables.


That is exactly what my example is. Reducing boilerplate in package internals.

See my other examples for public-facing macros.


What advantage does this give you as a macro instead of as a function dispatching on a CONFIG_HOME singleton? I assume the macro generates different code based on what OS you’re running on, but that could also be handled as a branch in a function, right?

struct ConfigHome end
const CONFIG_HOME = ConfigHome()
function setxdg(::ConfigHome, value)
    XDG_CONFIG_HOME[] = if haskey(ENV, "XDG_CONFIG_HOME") && !isempty(ENV["XDG_CONFIG_HOME"])
        chopsuffix(ENV["XDG_CONFIG_HOME"], Base.Filesystem.path_separator)
    else
        value
    end
end

Honestly I don’t mind macros except when they cause problems by not composing, so I couldn’t care less about internal macros; I’m just curious why you structured it this way. I feel like I’d have definitely reached for a function.

Laziness/conciseness: Given I’ve got about a dozen invocations of @setxdg per OS, writing a singleton for each variable assignment would be a fair bit of boilerplate.

It’s just a simple little macro that works and makes the internals a little nicer 🙂
