How to automate a laborious task (via metaprogramming? Need to access variables of the current workspace)

I’m writing a module where a user may create and analyze arbitrary “compositions” (expressions) of so-called CrystalOperators.

Right now, the manual workflow for analyzing an arbitrary composition is quite laborious. I am fairly sure the procedure can be completely automated via metaprogramming/macros, but everything I have tried runs into problems I cannot solve. Hopefully you can tell me if and how such an automation is possible.

Workflow example

Here is an example workflow of a potential user.

A user defines instances of CrystalOperators:
x_1::CrystalOperator, x_2::CrystalOperator, x_3::CrystalOperator
and for example other variables such as z::Int

They want to analyze a “composition” or expression of these variables, for example f = x_1*x_2+z*inv(x_3). To this end, the following steps currently have to be carried out manually (this is what I want to automate):

  1. CrystalOperators are mathematical functions defined with respect to a “lattice”. In the first step we need to find a (least) common sublattice and convert all CrystalOperators accordingly. (You may think of this step as a conversion to a common datatype: x::Float64 + y::Int is computed via x::Float64 + y::Float64, and I need to keep x::Float64 and y::Float64 around to proceed further.)
co_vec = [x_1, x_2, x_3] # create an array of the CrystalOperators within the expression f
# Find the least common sublattice A:
A = nothing
for x in co_vec
    if A === nothing
        A = x.C.L.A
    else
        A = lcm(A, x.C.L.A)
    end
end
# Rewrite all operators with respect to the least common sublattice A:
x_1 = wrtLattice(x_1, A)
x_2 = wrtLattice(x_2, A)
x_3 = wrtLattice(x_3, A)
  2. We need to make sure that the expression f makes sense. (As an analogy, you may think of the CrystalOperators as matrices: computing x_1*x_2+z*inv(x_3) only makes sense if the corresponding numbers of rows and columns match.)
try
    x_1*x_2 + z*inv(x_3)
catch e
    @show e
end
  3. If the above test was successful, we may proceed. We define a function based on the expression f (we basically replace each appearance of a CrystalOperator x with symbol(x, k)):
sym_f = (k) -> symbol(x_1, k)*symbol(x_2, k) + z*inv(symbol(x_3, k))
  4. Finally, we are able to analyze our operator composition. Often we are interested in the eigenvalues of sym_f(k) (which is nothing but a (square) matrix) for many different values of k:
kvec = 0:0.1:1
eigs = [eigvals(sym_f(k)) for k in kvec]
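
As a self-contained illustration of step 4, here is what the sampling loop does with a made-up 2×2 symbol in place of a real sym_f (the toy_sym_f below is purely illustrative, not the actual CrystalOperator code):

```julia
using LinearAlgebra

# Purely illustrative stand-in for sym_f: any function of k that returns a
# square matrix works here.
toy_sym_f(k) = [cos(2π * k) 1.0; 1.0 -cos(2π * k)]

kvec = 0:0.1:1
eigs = [eigvals(toy_sym_f(k)) for k in kvec]  # one eigenvalue vector per k
```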

How to automate this?

Is it possible (and how?) to automate these steps? It is important that steps 1 and 2 are executed only once, as the computation can be very expensive. I would like a macro or function which executes steps 1-3 and spits out the function sym_f:

sym_f = @create_sym(x_1*x_2+z*inv(x_3))

The problem is that I need to access and manipulate the variables of the current workspace.

Ideally, I would like to store copies of the variables and functions such as sym_f within a struct, e.g.

struct OperatorComposition
    dict::Dict{Symbol,CrystalOperator}
    sym_f::Function
end

Then I would like to have a macro which creates such an OperatorComposition via
oc = @createOperatorComposition x_1*x_2+z*inv(x_3)
such that

oc.dict = Dict(:x_1 => x_1, :x_2 => x_2, :x_3 => x_3) # i.e., oc.dict[:x_i] is the output of step 1, wrtLattice(x_i, A)


oc.sym_f = (k) -> symbol(Ref(oc.dict[:x_1]), k)*symbol(Ref(oc.dict[:x_2]), k) + z*inv(symbol(Ref(oc.dict[:x_3]), k))

Please let me know if anything is unclear. I’m grateful for any advice.

I would

  1. define methods for *, +, etc. that operate on CrystalOperators and scalars (and whatever else you need),
  2. have these return some type that encapsulates the relevant info, and also performs the required checks,
  3. for this type, define the relevant methods that are required for the analysis.

I don’t think that metaprogramming is needed or useful for this problem.
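
A minimal sketch of this suggestion, with made-up toy types standing in for CrystalOperator (none of these names come from the actual package): arithmetic builds a lazy expression tree whose operator leaves can be inspected once, before any evaluation.

```julia
struct Op                  # toy stand-in for CrystalOperator
    name::Symbol
end

struct Composed            # lazy node: the operation plus its operands
    op::Function
    args::Tuple
end

const Node = Union{Op, Composed}

Base.:*(a::Node, b::Node) = Composed(*, (a, b))
Base.:+(a::Node, b::Node) = Composed(+, (a, b))
Base.inv(a::Node) = Composed(inv, (a,))

# Walk the tree and collect every operator leaf; this is where a least
# common sublattice could be computed once and dimension checks performed.
collect_ops(x::Op) = [x]
collect_ops(c::Composed) = mapreduce(collect_ops, vcat, c.args)

x_1, x_2, x_3 = Op(:x_1), Op(:x_2), Op(:x_3)
tree = x_1 * x_2 + inv(x_3)   # builds a Composed tree, computes nothing yet
```

Defining the analysis methods (e.g. an eigenvalue routine) for Composed would then trigger the actual computation.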


The first tool in your toolkit should always be functions. Macros are primarily used to introduce new syntax. A good rule of thumb for macro usage is provided in this comment on a different thread: “you should know exactly what your macro is converting things to, otherwise the macro has gone too far.” In other words, macros are for transforming syntax, and functions are for computations. So normally when you are solving a new problem, you should first figure out how to solve it with functions (and perhaps user defined types). Then, maybe you can consider introducing new syntax with a macro that uses your functions under the hood.

I don’t remember much about crystal structure, so let’s consider the following simplified problem. We start with a point in space (represented by a position vector) and we have two possible operations on it: translation and rotation. Translation is represented by adding a vector to the position vector. Rotation is represented by multiplying the position vector with a rotation matrix. Let’s write a helper function that creates a rotation matrix for a given rotation angle in degrees:

function rotation(angle)
    [cosd(angle) -sind(angle);
     sind(angle)  cosd(angle)]
end
Julia is already pretty smart about vector and matrix algebra, so if we try to add a rotation to a translation, it won’t let us, because you can’t add a vector to a matrix:

julia> R = rotation(30)
2×2 Array{Float64,2}:
 0.866025  -0.5     
 0.5        0.866025

julia> T = [2, 3]
2-element Array{Int64,1}:
 2
 3

julia> R + T
ERROR: DimensionMismatch("dimensions must match")

Ok, so if we want to translate a point and then rotate it, how do we do it? Like this:

julia> v = [1, 1]
2-element Array{Int64,1}:
 1
 1

julia> R * (v + T)
2-element Array{Float64,1}:

Of course we can turn this into a function so that we can perform the same operation on any point:

julia> translate_then_rotate(v) = rotation(30) * (v + [2, 3])
translate_then_rotate (generic function with 1 method)

julia> translate_then_rotate([3, 5])
2-element Array{Float64,1}:

If regular vectors and matrices are not enough to describe your crystal operators, and you need more complicated behavior from * and +, then you can create your own user-defined types and add new methods to * and + so that the arithmetic operators have special behavior for your custom types.

For example, if you wanted to print a more informative error message when you try to add a rotation to a translation, you could use something like the following.

struct Rotation
    angle::Float64
    matrix::Matrix{Float64}

    # Define an inner constructor. This guarantees that
    # the matrix field contains a valid rotation matrix.
    function Rotation(angle)
        matrix = [cosd(angle) -sind(angle);
                  sind(angle)  cosd(angle)]
        new(angle, matrix)
    end
end

struct Translation
    vector::Vector{Float64}
end

function Base.:+(::Rotation, ::Translation)
    throw(ArgumentError("Rotations and translations cannot be added."))
end
Base.:+(t::Translation, r::Rotation) = r + t

Let’s try it out in the REPL:

julia> R = Rotation(30)
Rotation(30.0, [0.8660254037844386 -0.5; 0.5 0.8660254037844386])

julia> T = Translation([2, 3])
Translation([2.0, 3.0])

julia> R + T
ERROR: ArgumentError: Rotations and translations cannot be added.

julia> T + R
ERROR: ArgumentError: Rotations and translations cannot be added.

Hopefully that gives you some ideas to get started.


Thanks for your suggestion, I thought about it for a while.

I can implement variants of the methods *, +, etc. and switch between different versions of these methods corresponding to step 1.0, step 1.1, step 2, and step 3 (via a global variable?). I think (but I’m still not completely sure) that I can reduce these steps to the following workflow:

ComputationType = "step1.0"
A = x_1*x_2 + z*inv(x_3) # if the global variable ComputationType == "step1.0", the least common multiple lattice of all CrystalOperators is computed
ComputationType = "step1.1"
x_1*x_2 + z*inv(x_3) # if ComputationType == "step1.1", each CrystalOperator (x_1, x_2, and x_3) is modified with respect to the global variable A: x_i = wrtLattice(x_i, A)
ComputationType = "step2"
x_1*x_2 + z*inv(x_3) # if ComputationType == "step2", the check of step 2 is executed
ComputationType = "step3"
sym_f = x_1*x_2 + z*inv(x_3) # if ComputationType == "step3", the function sym_f = (k) -> symbol(x_1, k)*symbol(x_2, k) + z*inv(symbol(x_3, k)) is returned

To some extent this is simpler than the current workflow, but the implementation I have in mind feels quite inconvenient. Furthermore, I don’t know how to automate it even further.

I don’t know enough about your problem to give very detailed advice, but my priority would be eliminating globals. This usually simplifies the code and the implementation, and usually meshes well with Julia’s performance model. Something like

struct ComputationType1 end

compute(::ComputationType1, expr) = ... # maybe a recursive implementation?

compute(ComputationType1(), A)

If I try to implement something like compute(::ComputationType1, expr), I would call it via
compute(ComputationType1(), :(x_1*x_2+z*inv(x_3))), right? (Or do you have something else in mind?)

The function definition would look something like

function compute(::ComputationType1, expr::Expr)
    A = nothing
    for x in expr.args
        ex = eval(x)
        if typeof(ex) <: CrystalOperator
            A = A === nothing ? ex.C.L.A : lcm(A, ex.C.L.A)
        end
    end
    return A
end
However, I get an “UndefVarError: x_1 not defined” error. How can I access the variables of the current workspace?

Again, I would recommend solutions that don’t evaluate or assign anything in the global scope. In what I am proposing, expr would not be an Expr, but a type that you define that contains a representation, which you would then call with values.


Oh, if I understand your suggestion correctly, this would mean that I would have to define a method for each expression f?
I want the user to be able to freely choose/define the expression f they want to analyze.

I am sorry but I do not understand your problem well enough to provide further help.

Thank you nevertheless for your effort.

Even though you (and other people as well) advise against accessing or modifying variables of the global scope, I think this is the only way to provide the flexibility I want for the module. However, I don’t quite know how, or whether, that is possible.

I agree with @Tamas_Papp and @CameronBieganek that all you need is to define the functions for your operators. To make it a little more concrete, have a look at the source code for rationals and LazyArrays.jl.

In particular, your step 1 does not need to be evaluated until step 4. This is known as lazy (as opposed to eager) computation: you delay the calculation of necessary quantities until you need them. That’s why I linked LazyArrays.jl - there are tons of good examples there of how to build composed operators from user code and delay execution until needed. Here, that’s when you call eigvals(x::ComposedCrystalOperator) or eigvals(x::CrystalOperator), depending on your strategy.

Now, maybe you don’t even need to be that fancy: perhaps you have guarantees that op(x1::CrystalOperator, x2::CrystalOperator) returns a CrystalOperator and that everything is associative. Then you can determine the least common sublattice for that operation, check whether the operation is allowed by comparing the rows and columns of x1 and x2, and then return the new CrystalOperator. The rationals.jl code has a pattern similar to this. This approach is much easier to implement and conceptualize.

At the end of the day, your user will be able to supply arbitrary functions, with CrystalOperators as arguments, and be able to get their eigenvalues, all without any metaprogramming.
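
A rough sketch of this eager, rationals.jl-style pattern, with a toy type and an integer standing in for a real lattice descriptor (all names here are illustrative):

```julia
struct ToyOp
    lattice::Int   # toy stand-in for the lattice descriptor
    rows::Int
    cols::Int
end

# Each binary operation eagerly finds the common (sub)lattice, checks
# compatibility, and returns a new operator -- no expression tree needed.
function Base.:*(a::ToyOp, b::ToyOp)
    a.cols == b.rows || throw(DimensionMismatch("cols(a) must equal rows(b)"))
    ToyOp(lcm(a.lattice, b.lattice), a.rows, b.cols)
end

function Base.:+(a::ToyOp, b::ToyOp)
    (a.rows, a.cols) == (b.rows, b.cols) || throw(DimensionMismatch("sizes must match"))
    ToyOp(lcm(a.lattice, b.lattice), a.rows, a.cols)
end
```

With this, an expression like x_1 * x_2 + x_3 either yields a ToyOp on the common lattice or throws at the first incompatible pair.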


Thanks for insisting on the fact that I can use functions:

I finally realized that I have to build the expression like this:

f = :($x_1*$x_2+z*inv($x_3))

With this expression and the use of MacroTools.postwalk, I was able to construct the functions I need.

(Up to now I always used f = :(x_1*x_2+z*inv(x_3)).)
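
To sketch why the interpolated form matters (with a toy type; the real code uses MacroTools.postwalk, but a hand-rolled walk shows the same idea): because $x_1 splices the operator *value* into the Expr, a tree walk can find every such value and rewrite it into a call to a (here hypothetical) symbol function.

```julia
struct ToyOp               # toy stand-in for CrystalOperator
    name::Symbol
end

# Bottom-up walk over an Expr, applying f to every node (postwalk-style).
walk(f, x) = f(x)
walk(f, e::Expr) = f(Expr(e.head, map(a -> walk(f, a), e.args)...))

x_1, x_2, x_3 = ToyOp(:a), ToyOp(:b), ToyOp(:c)
ex = :($x_1 * $x_2 + inv($x_3))   # the ToyOp values sit inside the Expr

# Replace each interpolated operator with a call to symbol(x, k):
g = walk(x -> x isa ToyOp ? :(symbol($x, k)) : x, ex)
```

Had we used :(x_1*x_2 + inv(x_3)) instead, the Expr would only contain the bare symbols :x_1 etc., and looking up their values would require touching the global scope.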
