In my package, I’d like to offer a convenience function like this:
function gaussian(σ::Real=1.0)
    @eval function (x)
        exp(-abs2(x) / $(float(4σ)))
    end
end
I want the @eval because I don’t want the 4σ to be computed at every evaluation:
kernel = gaussian(3.0)
kernel(0.2)
After a long while, I realized that kernel is not defined on all workers, and seemingly neither

@everywhere kernel = gaussian(3.0)

nor putting @everywhere in the definition of gaussian helps. Something that works with ordinary variables also doesn’t work:

@everywhere kernel = $kernel

because Julia claims not to know what to interpolate on the r.h.s. (UndefVarError). How can I both keep the convenience of calling a function-generating function and have the resulting function defined everywhere?
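For concreteness, here is a minimal version of what I am seeing (a sketch; the exact error text may differ across Julia versions):

using Distributed
addprocs(1)

kernel = gaussian(3.0)            # gaussian as defined above, with @eval
kernel(0.2)                       # fine on the master process
remotecall_fetch(kernel, 2, 0.2)  # fails: worker 2 does not know the eval'd function's type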
Have you tried just doing this normally? There is fairly aggressive constant propagation these days.
I have tried with some of the @code_ macros (most of which I don’t master very well, I should say), and it did seem to me like there was a floatmul(%3, 4.0) (or something like that) in the code. But maybe I was doing it wrong?
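For reference, this is roughly how I looked at it (a sketch using the @eval-free variant; the exact names in the typed IR vary by Julia version):

f = let σ = 3.0
    x -> exp(-abs2(x) / float(4σ))
end
@code_typed f(0.2)   # the typed IR I saw still contained a multiplication by 4.0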
Have you benchmarked whether this does anything to the runtime of the function? Since you have an exp call in there, I would be surprised if it did.
Just did it.
julia> function gaussian(σ::Real=1.0)
           @eval function (x)
               exp(-abs2(x) / $(float(4σ)))
           end
       end
gaussian (generic function with 2 methods)
julia> using BenchmarkTools
julia> @benchmark $(gaussian(3.))(x) setup=(x = rand())
BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     4.241 ns (0.00% GC)
  median time:      5.776 ns (0.00% GC)
  mean time:        6.128 ns (0.00% GC)
  maximum time:     66.726 ns (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1000
julia> function gaussian2(σ::Real=1.0)
           function (x)
               exp(-abs2(x) / float(4σ))
           end
       end
gaussian2 (generic function with 2 methods)
julia> @benchmark $(gaussian2(3.))(x) setup=(x = rand())
BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     5.678 ns (0.00% GC)
  median time:      5.825 ns (0.00% GC)
  mean time:        6.194 ns (0.00% GC)
  maximum time:     64.749 ns (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1000
This will perhaps not be crucial in the overall process, but it would be nice to avoid.
Why do you need @eval for this? For example:
function gaussian(σ::Real=1.0)
    let s = -1/4σ
        x -> exp(s*abs2(x))
    end
end
(Though I’m skeptical that the 3% performance difference in your benchmark is worth a specialized higher-order function here.)
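For instance, a quick check that the let version gives the same values as your @eval version (the captured s = -1/(4σ) is computed once, at construction):

kernel = gaussian(3.0)
kernel(0.2) ≈ exp(-abs2(0.2) / 12.0)  # true: 4σ = 12.0 was folded into s up front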
That was the only way I could come up with to avoid that (stupid) extra operation. I understand that this does not change the world, but it felt strange, and I thought there should be a way to “precompute” this. Interestingly, when defined this way, there is no problem with kernel = gaussian(3.) being passed to different workers without being @everywhere'd. So, many thanks for your help!
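For the record, the full pattern that now works for me looks like this (a sketch, assuming the Distributed standard library; the worker count is arbitrary):

using Distributed
addprocs(2)

function gaussian(σ::Real=1.0)
    let s = -1/4σ
        x -> exp(s*abs2(x))
    end
end

kernel = gaussian(3.)
pmap(kernel, rand(100))  # the closure captures only the Float64 s, so it serializes to workers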
Could someone explain the use of @eval here and what the OP was trying to achieve? Suppose the OP didn’t have trouble with the parallelism: how does the @eval work? If I do

kernel = gaussian(3.0)
kernel(0.2)
kernel2 = gaussian(4.0)
kernel2(0.2)

wouldn’t that compute the 4σ twice anyway?
Yes, but the intention is to apply the same kernel = gaussian(3.) to many different arguments, i.e., kernel(x), and to precompute as much of what depends on σ = 3.0 as possible.
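Schematically, with the closure-based gaussian from above:

kernel = gaussian(3.0)                   # everything depending on σ is computed once, here
map(kernel, range(-1, 1; length=1001))   # each call reuses the captured s; -1/(4σ) is never recomputed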