I am having some issues with code and I can’t seem to reach a solution. Specifically, I am rewriting some Matlab code, and for one or two things I can’t seem to match Matlab’s speed.

Matlab Code:

a=[0:2^24-1]
b=exp(2*pi*a*i)

This is pretty fast but I can’t seem to match the execution in Julia:

a=collect(0:2^24-1)
b=exp(2*pi*im*a)

This is very, very slow, ~10s after compilation. I tried writing an explicit loop instead and saw only negligible improvement:

function imexp_1d(x::Array)
    out::Array{Complex{Float64}} = Array(Complex{Float64}, size(x))
    for i in eachindex(x)
        out[i] = cos(x[i]*2.0*pi) + sin(x[i]*2.0*pi)*im
        #out[i] = complex(cos(x[i]*2.0*pi), sin(x[i]*2.0*pi))
    end
    return out
end
tt=collect(Float64,0:2^24-1)
@time imexp_1d(tt);
@time imexp_1d(tt);

The timing is still ~9s. Am I missing something? Any help would be appreciated.

function imexp{T}(x::AbstractArray{T})
    out = similar(x, Complex{T})
    imexp!(out, x)
    out
end

function imexp!(out, x::AbstractArray)
    for i in eachindex(x)
        out[i] = cos(x[i]*2pi) + sin(x[i]*2pi)*im
    end
    out
end
tt=collect(Float64,0:2^24-1)
@time imexp(tt);
@time imexp(tt);
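As an aside, Julia’s built-in `cis(x)` computes `cos(x) + im*sin(x)` directly, so the whole computation can be a one-liner (a sketch, assuming a Julia version with dot-broadcast syntax; on older versions you could use `map` instead):

```julia
# cis(x) computes exp(im*x) = cos(x) + im*sin(x) in a single call
a = 0:2^24-1
b = cis.(2pi .* a)   # same values as exp(2*pi*im*a) in the Matlab code
```
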

Thanks! Yeah, I guess it must be the multithreading. I just tried your modified code with 4 threads going and got ~2.5s, which is much better. I thought it might be multithreading, but when I looked it up, Matlab’s docs said they only use multithreading automatically for linear-algebra-type operations.
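For reference, a threaded variant of the in-place loop might look like this (a sketch: `imexp_threaded!` is just an illustrative name, and it assumes Julia was started with `JULIA_NUM_THREADS` set, e.g. to 4):

```julia
# Threaded in-place version of the loop above.
# Each thread fills a disjoint chunk of `out`, so no locking is needed.
function imexp_threaded!(out, x::AbstractArray)
    Threads.@threads for i in eachindex(x)
        out[i] = cos(x[i]*2pi) + sin(x[i]*2pi)*im
    end
    out
end

tt  = collect(Float64, 0:2^24-1)
out = similar(tt, Complex{Float64})
@time imexp_threaded!(out, tt)
```
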

Also, is there a particular reason for defining imexp!() separately? Does it have to do with type stability?

That’s an in-place operation. If your output array is already allocated, it can be quicker, depending on the size of the array, and it also puts less load on the garbage collector if you’re calling it in a loop.
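To illustrate the buffer-reuse point: with the in-place version you allocate the output once up front and keep refilling it, so nothing is left behind for the GC each iteration (a sketch; the per-iteration update on `x` is purely hypothetical):

```julia
# imexp! as in the thread: fills `out` in place and returns it
function imexp!(out, x::AbstractArray)
    for i in eachindex(x)
        out[i] = cos(x[i]*2pi) + sin(x[i]*2pi)*im
    end
    out
end

x   = collect(Float64, 0:2^20-1)
buf = similar(x, Complex{Float64})   # allocated once, outside the loop
for k in 1:10
    x .+= 0.1          # hypothetical per-iteration update of the input
    imexp!(buf, x)     # refills buf in place: no new array, no GC churn
    # ... use buf ...
end
```
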