Hi, I’m relatively new to Julia but feel like I’m mostly getting the hang of it.
I have a project where I need to parse a large number of arbitrary expressions (e.g. ax^b, a+bx^2, etc.) into functions, but I'm running into significant performance issues with my current implementation. My approach is to generate an array of anonymous functions, which are then passed into the main function and evaluated there, but this evaluation is much slower than if the expressions are defined explicitly inside the main function.
A minimal example is as follows (note that this example only involves one form of expression, to allow for the comparison with the explicit definition):
using BenchmarkTools
coefficients = rand(1000)
exponents = rand(1000)
y = zeros(1000)
functions = []
for i in 1:1000
    push!(functions, x -> coefficients[i]*x^exponents[i])
end

function callfunc(functions, x, y)
    for i in 1:1000
        y[i] = functions[i](x)
    end
end

function explicitfunc(coefficients, exponents, x, y)
    for i in 1:1000
        y[i] = coefficients[i]*x^exponents[i]
    end
end
@btime callfunc(functions, 1.23, y)
@btime explicitfunc(coefficients, exponents, 1.23, y)
The @btime results are:
193.799 μs (7467 allocations: 116.67 KiB) # callfunc
61.686 μs (0 allocations: 0 bytes) # explicitfunc
Furthermore, if I then run the for loop in parallel with @threads on a 4-core CPU, the times are 180.434 μs and 17.477 μs, respectively.
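For reference, the threaded versions I benchmarked look roughly like this (callfunc_threaded and explicitfunc_threaded are just the names I'm using for this sketch):

using Base.Threads

function callfunc_threaded(functions, x, y)
    @threads for i in 1:1000
        y[i] = functions[i](x)
    end
end

function explicitfunc_threaded(coefficients, exponents, x, y)
    @threads for i in 1:1000
        y[i] = coefficients[i]*x^exponents[i]
    end
end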
I've gone through the documentation, the performance tips, and online forums as much as I could, and I haven't been able to figure out a way to improve the performance. Is there a way to parse expressions directly into the function and evaluate them in the local scope (e.g. something like eval(Meta.parse("expression")), but working in a local scope), or, failing that, is there a way to eliminate whatever overhead is causing the slowdown and allocations when the anonymous functions are called?
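To clarify what I mean by that, this is roughly the kind of thing I have in mind (the expression string here is just a made-up example standing in for my parsed expressions):

# Build a function from an expression string; this works at the global scope / REPL
expr_string = "2.0*x^0.5"
f = eval(Meta.parse("x -> " * expr_string))
f(1.23)

# What I'd like is an equivalent that I can use *inside* a function, so the
# parsed expression is evaluated in the local scope without the per-call
# overhead I'm seeing with the array of anonymous functions.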
Any help would be greatly appreciated.