I am writing a function where array operations, such as the example below, have to happen a few hundred thousand times.
arr = [1,0,0,1,1]
idx = [2,4]
res = arr[idx]
When I use a profiler to analyse the code, 89.69% of the time is spent calling getindex. Is there an alternative to using arrays, or some other way to improve the runtime?
How large are the arrays? If they are small (under 20 elements or so), you could use something like https://github.com/JuliaArrays/StaticArrays.jl. If you don’t mutate your results, you could also use views, so you don’t allocate a new array every time.
Reusing the array could also help, if that is possible.
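If the arrays really are this small, a minimal sketch of the StaticArrays version of your MWE (the sizes here are just placeholders for whatever your real code uses):

using StaticArrays

arr = SVector(1, 0, 0, 1, 1)   # stack-allocated, fixed-size vector
idx = SVector(2, 4)

res = arr[idx]                  # indexing with a static vector returns an SVector{2,Int}, no heap allocation

Because the sizes are part of the type, the compiler can unroll the indexing entirely, which is where the big speedup comes from.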
Please take a look at the docstring for the @inbounds macro. Things like idx in your example are probably better as tuples. If your data is all binary, you probably want an array of Bool rather than Int64. It is also probably worthwhile to give the “Performance Tips” page of the Julia manual another read-through.
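Putting those suggestions together, a rough sketch of what the MWE might become (assuming the data really is binary and the indices are always in bounds):

arr = Bool[1, 0, 0, 1, 1]   # Vector{Bool} instead of Vector{Int64}
idx = (2, 4)                 # a tuple of indices instead of a Vector

# Preallocate the output once and fill it in place. @inbounds skips
# bounds checking, which is only safe if the indices are known valid.
res = Vector{Bool}(undef, length(idx))
@inbounds for (i, j) in enumerate(idx)
    res[i] = arr[j]
end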
As @gbaraldi already said, if your actual code uses small arrays or vectors like your MWE does, you should be using StaticArrays for this. The speedup is unreal (and yet very real).
You could maybe use a view: res = @view arr[idx]. Without more context it is impossible to say for sure. Doing arr[idx] many times is costly because each call allocates a new array, so the best fix is usually to restructure the code so it doesn’t need to.
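For example (a sketch; whether the view helps depends on what happens to res afterwards, and buf here is just an illustrative reusable buffer):

arr = [1, 0, 0, 1, 1]
idx = [2, 4]

res = @view arr[idx]   # no copy; res reads straight out of arr
sum(res)               # use it like any other array

# If you do need a materialised result each iteration, reuse one buffer
# instead of allocating a fresh array every time:
buf = similar(arr, length(idx))
copyto!(buf, res)      # fills buf in place, no new allocation

Note that writing through the view mutates arr itself, so this only works if you treat res as read-only.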