I want to do this type of computation:

```
ot = collect(range(start=-3, stop=3, length=3600*4));
st = rand(1000000);
sgf = zeros(ComplexF64, length(ot));
@time for nn in 1:length(ot)
    sgf[nn] = sum(1 ./ (ot[nn] + 0.001*im .- st));
end
```
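For reference, one likely cause of the gap (an assumption on my part, not something profiled here) is that the loop runs in global scope, where `ot`, `st`, and `sgf` are untyped globals. Wrapping the same computation in a function (the name `sumgf` is made up for this sketch) lets the compiler specialize on the argument types; sizes are reduced so the demo runs quickly:

```julia
# Sketch: the same computation as above, but inside a function so the
# compiler can specialize it (array sizes shrunk for a quick demo).
function sumgf(ot, st)
    sgf = zeros(ComplexF64, length(ot))
    for nn in eachindex(ot)
        sgf[nn] = sum(1 ./ (ot[nn] + 0.001im .- st))
    end
    return sgf
end

ot = collect(range(-3, 3, length=100))   # full problem uses 3600*4 points
st = rand(10_000)                        # full problem uses 1_000_000 points
@time sumgf(ot, st)
```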

It prints `368.634083 seconds`, which is far slower than the Matlab counterpart:

```
ot = linspace(-3, 3, 3600*4);
st = rand([1, 1000000]);
tic
for nn = 1:length(ot)
    sgf(nn) = sum(1 ./ (ot(nn) + 0.001i - st));
end
toc
```

Matlab spends only 58 seconds on my laptop.

Then I realized that Julia spends too much time on the computation inside the summation:

```
@elapsed 1 ./ ((ot[1]+0.001*im).-st)
0.0234201
```
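Note also that `@elapsed` on a fresh expression includes JIT compilation the first time it runs; timing a second call (or using a benchmarking package such as BenchmarkTools, not used in this sketch) gives a steadier number. The helper name `green` below is made up for illustration:

```julia
# Sketch: the first call pays the one-time compilation cost,
# the second call shows the steady-state cost.
ot = collect(range(-3, 3, length=100))
st = rand(10_000)
green(x, st) = 1 ./ ((x + 0.001im) .- st)   # hypothetical helper

t_first  = @elapsed green(ot[1], st)   # includes compilation
t_second = @elapsed green(ot[1], st)   # compiled code only
println((t_first, t_second))
```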

and if using a `for` generator:

```
@elapsed collect(ComplexF64, 1.0 / ((ot[1] + 0.001*im) - st[i]) for i in 1:length(st))
0.2392825
```
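A plausible reason the generator is slower still (again an assumption, since the generator above closes over the untyped globals `ot` and `st`) is that every iteration goes through dynamic dispatch. Passing them as arguments to a function (the name `green_row` is made up) removes that overhead:

```julia
# Sketch: the same collect-over-generator, but with the inputs passed
# as arguments instead of captured from untyped globals.
green_row(x, st) = collect(ComplexF64, 1.0 / ((x + 0.001im) - s) for s in st)

st = rand(10_000)
@time green_row(-3.0, st)
```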

It is even slower.

So what makes Julia compute this kind of vectorized operation at such a low speed? I assume there are optimization methods I am missing.

Edit:

By substituting `inv` for `1/c`, the summation over the complex array becomes much faster:

```
@elapsed @inbounds @fastmath sum(inv.(ot[1] + 0.001*im .- st))
0.0039946
```
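As a sanity check (and a guess about where the speedup comes from): `inv.(z)` computes the same values as `1 ./ z`, so the gain here is presumably from `@fastmath`, which replaces Julia's overflow-safe complex division with the naive formula at some cost in accuracy at the extremes of the floating-point range. A quick equivalence check:

```julia
# Sketch: inv.() agrees with 1 ./ on this data; the speed difference is
# presumably due to @fastmath relaxing complex-division semantics.
st = rand(1_000)
z  = (0.5 + 0.001im) .- st
@assert inv.(z) ≈ 1 ./ z
@assert @fastmath(sum(inv.(z))) ≈ sum(1 ./ z)
```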

It is even faster than Matlab. However, the full `for` loop is still slow:

```
@elapsed for nn in 1:length(ot)
    @inbounds @fastmath sgf[nn] = sum(inv.(ot[nn] + 0.001*im .- st));
end
81.828844
```
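Two remaining costs I suspect (not measured here): the loop still runs in global scope, and each iteration allocates a fresh million-element temporary for the broadcast before summing it. A function with an explicit inner accumulation loop (the name `sumgf!` is made up, and sizes are reduced for the demo) avoids both:

```julia
# Sketch: accumulate the sum directly, so no temporary array is built
# on each outer iteration (array sizes shrunk for a quick demo).
function sumgf!(sgf, ot, st)
    @inbounds for nn in eachindex(ot)
        z = ot[nn] + 0.001im
        acc = zero(ComplexF64)
        @fastmath for s in st
            acc += inv(z - s)
        end
        sgf[nn] = acc
    end
    return sgf
end

ot = collect(range(-3, 3, length=100))
st = rand(10_000)
sgf = zeros(ComplexF64, length(ot))
@time sumgf!(sgf, ot, st)
```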

I guess there are still some aspects that could be optimized.