Automatic function generation for finite-difference stencils using metaprogramming

I’m working on a basic finite-difference code that can use a wide variety of finite-difference stencils depending on the simulation’s parameters. In short, there is a “general case” from which all of the stencils are derived. But sometimes certain features aren’t needed, and a shorter version of the stencil could be used to significantly improve performance.

There are hundreds of possible variations… and the easiest way to maintain this combinatorial explosion would be code generation. I’m thinking it would be great if I could describe each element of the stencil “programmatically” in a dict, and then iterate over that to generate a new function for each relevant combination.
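Roughly, I’m imagining something like this (just a sketch with made-up term names, not working code from my project):

# Describe each optional stencil term once...
stencil_terms = Dict(
    :diffusion => :(0.1 * (u[i-1] - 2u[i] + u[i+1])),
    :advection => :(-0.5 * (u[i+1] - u[i-1])),
)

# ...then generate a specialized method for each relevant combination of terms.
for terms in [(:diffusion,), (:advection,), (:diffusion, :advection)]
    body = Expr(:call, :+, :(u[i]), (stencil_terms[t] for t in terms)...)
    fname = Symbol(:step_, join(terms, "_"))
    @eval function $fname(out, u)
        for i in 2:length(u)-1
            out[i] = $body
        end
        return out
    end
end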

I’ve read through the Metaprogramming chapter of the Julia manual, along with various online tutorials. But I’m looking for something a bit more focused, particularly something that highlights the “don’t repeat yourself” philosophy (by contrast, most of the documentation I’ve read is a bit more abstract).

Is anyone aware of some good resources that describe Julia’s capabilities in this regard (before I spend too much time reinventing the wheel)?

you probably don’t want to generate code yourself for this. instead, just put optional parts inside if statements that are resolvable at compile time.

Thanks, @Oscar_Smith, for the reply!

just put optional parts inside if statements that are resolvable at compile time.

The problem with that is that my stencil goes inside a loop, so wrapping hundreds of conditionals that get evaluated on every iteration would be a performance bottleneck. Instead, I was hoping to generate the entire loop+stencil structure for each possible combination.

DynamicGrids.jl and Neighborhoods.jl (work in progress) do something like this for filters and cellular automata neighborhoods, because there are a lot of possible sizes and shapes.

I don’t use a dict; I just generate neighborhood shapes with an equation in a @generated function. For example, this one generates the offsets in a Von Neumann neighborhood of radius R and dimension N:
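(A simplified sketch along those lines, with a hypothetical name rather than the exact package source:)

@generated function offsets(::Val{R}, ::Val{N}) where {R,N}
    expr = Expr(:tuple)
    # Walk the (2R+1)^N hypercube of candidate offsets...
    for I in CartesianIndices(ntuple(_ -> -R:R, N))
        offset = Tuple(I)
        # ...and keep those within Manhattan distance R, excluding the center point
        if 0 < sum(abs, offset) <= R
            push!(expr.args, offset)
        end
    end
    return expr
end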

That’s pretty much it: a basic algorithm that handles every possible Von Neumann neighborhood at compile time, building a :tuple expression by checking Manhattan distance and pushing arguments. The output is a Tuple of all the neighborhood offsets, which I then wrap in an SVector so that map over the offsets compiles away; you can use it to index, calculate distances, whatever, and it all happens at compile time.

(Also note that if statements on types don’t run at all; the compiler will just elide all of a branch’s code completely. So the main idea here, whether for that or for generated functions like mine, is to get the parameters you need at compile time into the type system as type parameters.)


Maybe you should consider Symbolics.jl, as described in this video (I’ve linked to the timestamp discussing stencils):
https://youtu.be/xQRuywWqc4s?t=1880


The dispatch table in Julia can be thought of as a dict that holds code entries…

This is not a problem when the if statement is resolved at compile time.
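For example (a hypothetical sketch, not your actual kernel: the branch condition depends only on a type parameter, so the dead branch is removed when the function is specialized):

struct StencilConfig{UseDiffusion} end

function apply!(out, u, ::StencilConfig{UseDiffusion}) where {UseDiffusion}
    for i in 2:length(u)-1
        val = u[i]
        if UseDiffusion  # known at compile time, so the false branch is elided
            val += 0.1 * (u[i-1] - 2u[i] + u[i+1])
        end
        out[i] = val
    end
    return out
end

u = rand(100)
apply!(zero(u), u, StencilConfig{true}())   # diffusion term included
apply!(zero(u), u, StencilConfig{false}())  # branch compiled away entirely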


The dispatch table in Julia can be thought of as a dict that holds code entries…

Cool! Is there a way to expand this functionality beyond argument type?

This is not a problem when the if statement is resolved at compile time.

Great point. Still, it would be nice to leverage some generation technique so that I don’t have to manually write and maintain 300+ case statements.

Great suggestion, @Raf!

To add a bit of context, I’m using the ParallelStencil.jl package, which uses some function macros (e.g. @parallel) to define my stencil kernels. Wrapping that construct with an additional @generated macro is proving to be a bit tricky… but I’ll keep playing with it!

I wouldn’t do that; mixing macros rarely works. Write your own @generated functions that you call inside the @parallel call.

Notice that my generated function is a very small, tight function that is called inside other functions, which themselves run inside parallel tools (in my case KernelAbstractions.jl).

Do the smallest possible thing you can. As I mentioned with SVector, you can get the compile-time computation to propagate to other parts of the code without introducing any runtime information, using e.g. map. Work in a modular way instead of trying to write huge generated functions.
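For example (a hypothetical radius-1, 2D case rather than the package source):

using StaticArrays

# Compile-time offsets for a radius-1, 2D Von Neumann neighborhood
const OFFSETS = SVector((0, -1), (-1, 0), (1, 0), (0, 1))

# map over an SVector unrolls, so summing the neighborhood around (i, j)
# compiles to straight-line code with no runtime loop over the offsets
neighbor_sum(u, i, j) = sum(map(o -> u[i + o[1], j + o[2]], OFFSETS))

u = rand(5, 5)
neighbor_sum(u, 3, 3)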

Yes, there is a thing called “value types” that lets you dispatch on values (integers, symbols, etc.) as well. For example:

f(::Val{1}) = "hello"
f(::Val{2}) = "world"
# Val(4-3) === Val(1) and Val(1+1) === Val(2), so dispatch picks the matching
# method; with constant arguments this is resolved at compile time.
h() = string(f(Val(4-3)), ", ", f(Val(1+1)))
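A usage note (assuming the definitions above): with literal or constant values the Val dispatch is resolved at compile time, while wrapping a runtime value in Val still works but falls back to dynamic dispatch.

h()          # == "hello, world"
n = rand(1:2)
f(Val(n))    # still works, but the dispatch happens at runtime since n isn't a constant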

Although you may be better served building them into your parameter structs than using lots of Val types:

For example:

struct MyModelParams{A,B,C,D}
    runtime_param::D  # A, B, and C exist only as compile-time type parameters
end
# Convenience constructor: specify the compile-time parameters, infer D from the field
MyModelParams{A,B,C}(runtime_param::D) where {A,B,C,D} =
    MyModelParams{A,B,C,D}(runtime_param)

f(::MyModelParams{A}) where A = 2A  # compile-time computation on A

Then A, B, and C can be whatever isbits values you need at compile time.
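For instance (hypothetical parameter values):

params = MyModelParams{3, true, :upwind}(0.5)  # A = 3, B = true, C = :upwind at compile time
f(params)  # == 6, computed from the type parameter A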