If I run @benchmark f(1, d) and @benchmark g(1) (in the REPL), both take the same amount of time to run (which is roughly what I expected). Is there a way to write a function equivalent to g(x) that runs as fast as the sum of two floating-point numbers? I am thinking that macros might work, because they are evaluated at parse time instead of execution time, but I don't know how to do it.
This does not really solve my issue, since g(x) needs to be defined through f(x, d). The context in which I am using this is more something like
my_fun = function (f)
    d = generate_data()
    g(x) = f(x, d)
    # do things with g(x)
    return result
end
So, in some cases the function f(x, d) involves values that, intuitively, would not need to be recomputed at each call (such as in the first example), but in other cases it does not.
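For concreteness, a toy case could look like the following (expensive and the definition of d are hypothetical names, just for illustration):

d = randn(1_000)                # some data
expensive(d) = sum(abs2, d)     # stands in for a costly computation that depends only on d
f(x, d) = x + expensive(d)
g(x) = f(x, d)                  # every call to g re-runs expensive(d), even though d never changes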
Basically, it sounds like you need to refactor your code so that expensive calculations can be precomputed (and passed as parameters, possibly hidden within an opaque data structure) if desired.
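As a minimal sketch of that idea, assuming you can split f into an expensive d-only part and a cheap per-call part (f_cheap and the reuse of expensive below are assumptions, not something from your code), the closure can capture the precomputed value instead of the raw data:

my_fun = function (f_cheap)
    d = generate_data()
    pre = expensive(d)          # hypothetical: the costly, x-independent part, done once
    g(x) = f_cheap(x, pre)      # g now only does the cheap per-call work
    # do things with g(x)
    return result
end

Here pre plays the role of the "opaque data structure": the code that calls g does not need to know how pre was built, only that g is cheap to evaluate.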
For example, A \ b (where A is a square matrix) calls lu(A) \ b, so if you want you can precompute the expensive LU factorization F = lu(A) (stored in an opaque “factorization object” data structure) and then re-use it for subsequent solves F \ b.
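In code, that reuse might look like this (A, b1, and b2 are placeholders for your matrix and right-hand sides):

using LinearAlgebra

F = lu(A)       # expensive factorization, computed once
x1 = F \ b1     # each subsequent solve reuses F and is much cheaper
x2 = F \ b2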