Macros and standard evaluation


I’ve been using this in ChainMap for a while, inspired by hadley’s non-standard evaluation paradigm.

Imagine a function

function binary(a, b, c)
    Expr(:call, b, a, c)
end

I’ve been using a macro called @nonstandard to generate the code below automatically:

@nonstandard binary
# expands to
macro binary(es...)
    binary(es...)
end

Which means you could then do stuff like

[1, 2] == @binary 1 vcat 2

But it seems like you could skip this whole process altogether, including the macro definition, by having @binary always refer to the binary function.

All macros would also be available as standard-evaluation functions. You could use them for mapping, reducing, etc.
(Less) macroexpand debugging!
More intuitive, at least for me.
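To make the mapping/reducing point concrete, here is a sketch using the binary function from above; the standard-evaluation version composes with ordinary higher-order functions:

```julia
function binary(a, b, c)
    Expr(:call, b, a, c)
end

# the standard-evaluation call just returns the expression
binary(1, :vcat, 2)  # → :(vcat(1, 2))

# so it works with map, reduce, etc.
exprs = map(op -> binary(1, op, 2), [:+, :-, :*])
# → [:(1 + 2), :(1 - 2), :(1 * 2)]
```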

Might require shuffling around the namespace. For example, @time and time both exist in Base, I think.


What are you proposing?
It’s a bad idea to make macros and functions point to the same code, since they almost never do exactly the same thing.


Sorry if I wasn’t clear. I’m not sure if I’m using the right vocabulary here.

So basically, if I define

function binary(a, b, c)
    Expr(:call, b, a, c)
end

Then, without any extra steps, this would just work:

[1, 2] == @binary 1 vcat 2

Basically, in this case, @f would be syntax for: at parse time, bring the arguments in as expressions, pass them to f, and insert the resulting expression into the code. Named macros wouldn’t exist anymore.
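A sketch of the intended equivalence, with eval standing in for the splice the parser would do:

```julia
function binary(a, b, c)
    Expr(:call, b, a, c)
end

# under the proposal, the parser would treat
#     @binary 1 vcat 2
# as "call binary on the raw argument expressions, then splice in the result":
ex = binary(1, :vcat, 2)   # :(vcat(1, 2))
eval(ex)                   # [1, 2]
```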

For my purposes, it’s almost always useful to have access to the standard evaluation version of a macro.


Correct, that’s exactly what I said is a bad idea.

This is rarely useful. Definitely not in public API, rarely in internal APIs. Even then, most of the time, just emitting the call to another macro is enough.

What’s wrong with macroexpand? With @macroexpand there’s even less to type. Printing in the macro should also work on 0.5 and master.

Quite the opposite. The macro at definition time is a clear signal that the function is meant to generate expressions, which is rarely the case for a normal function (I doubt any public API functions can be used as a valid and meaningful macro). Similarly, related to this point but unrelated to the proposal, the @ at runtime clearly signals custom syntax.

This essentially means that the change is unworkable. Again, I don’t know a single case where a public function and a public macro share the exact same implementation.


Hmm. Ok, I see what you’re saying about this not being useful for a lot of users (who aren’t doing metaprogramming). Now that I think about it, though, at least the time issue could be handled using multiple dispatch (defining a method time(e::Expr)).


No it can’t. @time a or @time 1.0 is perfectly valid and should continue to be. You can of course try to add all kinds of signatures to either, but then you should realize that the two have nothing to do with each other and shouldn’t conflict in the first place.
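The point that macro arguments need not be Exprs can be checked directly. This hypothetical @argtype macro (not from the thread) just reports the type of the raw argument it receives, showing why dispatching on Expr can’t separate the two cases:

```julia
# a macro sees the unevaluated argument; its type varies with the syntax
macro argtype(e)
    QuoteNode(typeof(e))
end

@argtype a      # → Symbol
@argtype 1.0    # → Float64
@argtype f(x)   # → Expr
```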

I never find it useful when doing metaprogramming. Apart from the small saving in typing over macroexpand, if you are calling macros as functions a lot in your code, it may suggest that you are not using macros correctly. e.g. in all cases I can think of, you can just use @binary directly in another macro that uses it. (FWIW, your binary is not a hygienic macro)


Hmm. I think we’re talking past each other?

If the code from macro time gets moved over to time(e), then @time a and @time 1.0 will continue to be valid, I think? time() could continue to return the current time.

The macroexpand part was more of a bonus, but imagine: instead of
macroexpand(:(@time a))
you could just do
time(:a)

Here’s an example of how you might want to reuse binary.

function map_macro(e)
    Expr(:call, :map, e.args...)
end

function map_binary(es...)
    map_macro(binary(es...))
end
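A self-contained sketch of what that chain produces, with the missing ends added to the definitions:

```julia
function binary(a, b, c)
    Expr(:call, b, a, c)
end

function map_macro(e)
    Expr(:call, :map, e.args...)
end

function map_binary(es...)
    map_macro(binary(es...))
end

ex = map_binary(:([1, 2]), :vcat, :([3, 4]))
# ex == :(map(vcat, [1, 2], [3, 4]))
eval(ex)  # → [[1, 3], [2, 4]]
```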

I also wasn’t suggesting ending hygiene scrubbing. Unless I’m confused, it would work perfectly well in this proposal to have @f scrub the results of f for hygiene, unless there was an esc in the expression f returns.


This isn’t the signature you posted before. And @eval @time $(...::TmStruct) should still be valid too.

It actually looks pretty terrible and won’t work; you still need to quote a. @macroexpand @time a is pretty clean and requires no quoting, so it’ll be much cleaner for any non-trivial input expression.

The reason this is bad is that you are constructing a valid macro return and then parsing it again in another function. It’ll be very unclear to the reader what expectations you have on both the macro and the function, since there are trivial changes that are valid for a normal macro but aren’t valid for these anymore (e.g. binary cannot just have an overall esc, and map_macro has to accept esc as input, which a normal macro never does).
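The esc point can be made concrete. This hypothetical binary_esc (my name, not from the thread) is binary with an overall esc added, which is a routine change for a normal macro but breaks the chained-function version:

```julia
# binary with an overall esc, as a normal macro body might have
binary_esc(a, b, c) = esc(Expr(:call, b, a, c))

function map_macro(e)
    Expr(:call, :map, e.args...)
end

# the esc wrapper changes the shape map_macro sees:
map_macro(binary_esc(1, :vcat, 2))
# → :(map(vcat(1, 2))), not :(map(vcat, 1, 2))
```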


I would encourage you to spend a few weeks reading the literature on FEXPRs to learn why the majority of the Lisp community decided that Lisp’s now-eradicated equivalent of R’s non-standard evaluation was a bad feature in programming languages. Particularly worth reading is Kent Pitman’s paper on “Special Forms in Lisp”, which features this very important section:

In this paper, the motivations for using special forms are discussed, followed by a summary of the advantages and disadvantages of employing MACRO’s, FEXPR’s, and NLAMBDA’s as tools for their implementation. It is asserted that MACRO’s offer an adequate mechanism for specifying special form definitions and that FEXPR’s do not. Evidence is given which supports the author’s contention that FEXPR’s interfere with the correct operation of code-analyzing programs such as the compiler. Finally, it is suggested that, in the design of future Lisp dialects, serious consideration be given to the proposition that FEXPR’s should be omitted from the language altogether.


Great! Thanks for the helpful article. I’m not sure I understand it, and I’m also not sure that what I was proposing corresponds exactly to FEXPRs (in fact, non-standard evaluation might be a misnomer), but I am glad someone smarter than me has thought about this carefully.