Macro in macro with macros

#1

I have a macro (say m1) which might or might not be defined, depending on whether the user included a certain file. To tackle the problem, I am trying to create m2, which calls m1 if the module x it is defined in has been loaded.

When m1 and module x are not loaded, the following code works as expected:

macro m2(ex)
    println("@m2")
    :(
        if @isdefined(x)
            :( @m1 $($ex))
        else
            println("x is not defined")
            $ex
        end
    )
end

function f1()
    @m2 println("function f1")
end

f1()

If I later actually load the module and call f1 again

# typically happens via include("some_file.jl")
macro m1(ex)
    println("@m1")
    quote
       println("@m1")
       $(esc(ex))
    end
end
x = 3

f1()

the body of f1 does not get wrapped in @m1, i.e. I get the following output:

julia> f1()
function f1
:(#= test.jl:5 =# @m1 nothing)

In fact, I get the same output even if I don't define m1 and just define x.

Note: in practice, m1 is defined in a third-party package, so I don't have much ability to modify it. Ideally I would love m2 to deal with all the uncertainty.

Of course the following works (and is currently used in our code), but it depends heavily on the order in which modules/files are loaded:

macro m2(ex)
    if @isdefined(x)
        :( println("@m2"); @m1 $ex)
    else
        println("x is not defined")
        :(     println("@m2"); $ex )
    end
end

It would be cool to be able to write a macro m2 which behaves differently once the user has loaded the module.


#2

It's really not possible to implement @m2 quite the way you're imagining here, because macro expansion of the body of f1 happens only once, at compile time.

What you seem to be asking for is that the loading of x triggers the execution of some code which redefines f1 using the @m1 utility. You can probably use Requires.jl to hook the loading of x and do this, but changing the behavior of a function dynamically like this sounds like a bad idea to me! In some selected situations (adding debug code, perhaps) it might be useful. A bit hard to say without knowing the real names of @m1 and x.
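To make the Requires.jl suggestion concrete, here is a minimal sketch. It assumes the code lives in a package module (since @require only works from a package's __init__); the module name MyPkg and the redefinition of f1 are illustrative, and the UUID should be checked against CuArrays' Project.toml.

```julia
module MyPkg

using Requires

# CPU fallback definition, used until (and unless) CuArrays is loaded
f1() = println("function f1")

function __init__()
    # Hook that fires when the user does `using CuArrays`; the UUID is
    # assumed here and must match CuArrays' Project.toml entry.
    @require CuArrays="3a865a2d-5b23-5a0f-bc46-62713ec82fae" begin
        # Redefine f1 now that CuArrays.@sync is available
        @eval f1() = CuArrays.@sync println("function f1")
    end
end

end # module
```

Note that this redefines the function at load time, which is exactly the kind of dynamic behavior change cautioned against above; it works, but it can be surprising to readers of the code.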


#3
  1. It is not very robust to condition macro expansion on global state at expansion time, since you cannot control when expansion happens; ideally, a macro should be a deterministic mapping from one AST to another (except for gensym substitutions).

  2. You can condition the results of methods on global state much more easily; in the special case of loading an extra module, just overwrite the relevant methods there, otherwise use a global (e.g. a const container of some sort).
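A minimal sketch of point 2, using an ordinary function instead of a macro (the name maybe_sync is illustrative, not from any package):

```julia
# Default method: just run the work with no wrapping
maybe_sync(f) = f()

# After the GPU module is loaded, overwrite the method, e.g.:
#     maybe_sync(f) = CuArrays.@sync f()

# Call sites look the same either way:
run_work() = maybe_sync() do
    sum(1:10)
end
```

Unlike a macro, which expands once, a function's method table is consulted at call time, so redefining maybe_sync after loading the extra module changes the behavior of existing callers regardless of loading order.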

In case you are familiar with macros from another Lisp-like language: in Julia they also play an important role for syntax transformations, but we have much more idiomatic tools for zero- or low-cost abstraction. If you give more context, you may get suggestions for a more idiomatic solution.


#4

Thanks for the replies. I was hoping to learn how macros expand using simpler examples.

However, here is more background.

We have a piece of code which might run on either CPU or GPU, depending on where users want it (some laptops don't have discrete GPUs).
When we run the code on the GPU, we want to use the @sync macro from CuArrays: https://github.com/JuliaGPU/CuArrays.jl/blob/master/src/utils.jl
If it is CPU processing, no macro should be used (such a user/system won't even have the CUDA-related libraries loaded).

As said, today we use the following:

macro m2(ex)
    if @isdefined(CuArrays)
        :( CuArrays.@sync $ex)
    else
        println("CuArrays is not defined")
        :(  $ex )
    end
end

and we just make sure that using CuArrays is executed before f1 is parsed, in case we run on the GPU. That works.

I was just thinking that one day I will forget the order in which the includes should be done. The code will still run on the GPU, just not wrapped in the sync macro, even if the user later loaded the CUDA libraries and set up all the data on the GPU... I.e. it will not crash, it will just cause a different resource-usage pattern.

So I was wondering if it would be possible to make this independent of the loading order.

But of course, it could be that it is easier to maintain the loading order than to maintain a page-long macro which takes a second to run on each loop iteration (which we don't want either). I.e. the current way is actually very efficient code-wise.
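For what it's worth, one way to remove the order dependence while keeping the @m2 call-site syntax is to push the decision from macro-expansion time to run time, routing through a container that the GPU setup code overwrites. This is only a sketch; _sync_impl and @m2 are illustrative names:

```julia
# Holds the current sync strategy; default is a plain call with no sync
const _sync_impl = Ref{Function}(f -> f())

# After `using CuArrays`, install the real wrapper once:
#     _sync_impl[] = f -> CuArrays.@sync f()

macro m2(ex)
    # Expands to a runtime indirection instead of baking the decision in
    quote
        _sync_impl[](() -> $(esc(ex)))
    end
end

function f1()
    @m2 println("function f1")
end
```

Here f1 consults _sync_impl on every call, so it picks up the GPU wrapper no matter when CuArrays was loaded; the cost is one Ref load and an indirect call per invocation, which may or may not matter in a hot loop.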


#5

I may be missing some detail, but I would just define functions along the lines of

function do_stuff(backend, args...)
    # ...
end

and propagate backend down to the callees, and set up

struct CPUBackend end
struct GPUBackend end

(possibly with parameters that determine number of threads etc) and dispatch on it where it makes sense.
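Putting the pieces above together, a minimal sketch of this pattern might look as follows (the names and the GPU method body are illustrative):

```julia
struct CPUBackend end
struct GPUBackend end

# CPU method runs the work directly
do_stuff(::CPUBackend, xs) = sum(xs)

# A GPU method would wrap the same work, e.g.:
#     do_stuff(::GPUBackend, xs) = CuArrays.@sync sum(cu(xs))

backend = CPUBackend()            # chosen once, e.g. from a config flag
do_stuff(backend, [1, 2, 3])      # dispatches to the CPU method, returns 6
```

Since the backend is an ordinary value passed down the call chain, the choice is made at run time by dispatch, with no dependence on the order in which files are included.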