For an algorithm I need to generate x = [1,2,...,n] for a large n, and I can’t use x = 1:n because I need to reassign values in x, e.g. x[10] = x[2].

Currently I am doing x = collect(1:n), and I tested x = [1:n...]; they are similar in speed. But I am wondering whether those are already the fastest options.
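(For context, the reason collect is needed at all: a UnitRange is lazy and immutable, so element assignment requires a materialized array. A minimal illustration:)

```julia
r = 1:5
# r[2] = 10        # would throw: setindex! is not defined for ranges

# collect materializes the range into a mutable Vector{Int}
x = collect(1:5)
x[5] = x[2]        # now allowed; x == [1, 2, 3, 4, 2]
```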

Because of your question on SO, I think you probably want an array with 2 billion elements, and that you have some concrete problem to solve.

I am just curious: if you cannot use something like array(n) = n (computing each value on demand) instead of materializing array = [1:2^31...], then you presumably need to change these values.

But are you planning on shrinking this array with deleteat! or something similar?

Maybe you could initialize the array with zeros: array = zeros(Int, 2^31) (which should be quick) and use something like getvalue(index) = array[index] == 0 ? index : array[index], if you don’t need to look up values too often.
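A minimal sketch of that sentinel scheme (the names getvalue/setvalue! are just for illustration). Since the stored values are indices in 1:n, zero never occurs as a real value, so a zero entry can safely mean "this slot still holds its own index":

```julia
# Zero-filled allocation is cheap; an unwritten slot reads back as its index.
const A = zeros(Int, 10)

getvalue(i) = A[i] == 0 ? i : A[i]
setvalue!(i, v) = (A[i] = v)

getvalue(7)       # untouched slot: returns 7
setvalue!(7, 3)
getvalue(7)       # now returns 3
```

The trade-off is one extra comparison per read, which is why it only pays off if lookups are relatively rare.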

Would you like to describe your problem in a gist or blog post (or here, if it is not too long) to help us understand what to optimize?

It’s actually something different from the SO post now. Basically I am trying to create an algorithm that sorts a vector and produces its sortperm at the same time. To start the algorithm I need an index vector 1:n, which I then permute.
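For reference, one simple way to get both the sorted vector and its permutation from an index vector is to sort the indices by the values they point at (a sketch of the general idea, not necessarily the fastest approach):

```julia
v = [30, 10, 20]

# Start from the identity permutation, then permute it by comparing
# the elements of v that each index points at.
p = collect(1:length(v))
sort!(p, by = i -> v[i])

sorted = v[p]      # [10, 20, 30]
# p is now [2, 3, 1], the same result as sortperm(v)
```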

Anyway, I think collect(1:n) is pretty fast, but I just wanted to make sure.

using BenchmarkTools

function genn(n)
    x = zeros(Int, n)
    x .= 1:n
    return x
end
@benchmark genn(100_000_000)

function genn1(n)
    return collect(1:n)
end
@benchmark genn1(100_000_000)

using Base.Threads

function fcollect(N, T = Int)
    nt = nthreads()
    n, r = divrem(N, nt)
    a = Vector{T}(undef, N)               # uninitialized allocation
    @threads for i = 1:nt
        ioff = (i - 1) * n
        nn = ifelse(i == nt, n + r, n)    # last chunk picks up the remainder
        @inbounds for j = 1:nn
            a[ioff + j] = ioff + j
        end
    end
    return a
end
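A quick correctness check (repeating the definition so the snippet runs standalone); the result should match collect regardless of the thread count, since only the last chunk absorbs the divrem remainder:

```julia
using Base.Threads

# Same chunked fill as above, repeated so this snippet is self-contained.
function fcollect(N, T = Int)
    nt = nthreads()
    n, r = divrem(N, nt)
    a = Vector{T}(undef, N)
    @threads for i = 1:nt
        ioff = (i - 1) * n
        nn = ifelse(i == nt, n + r, n)
        @inbounds for j = 1:nn
            a[ioff + j] = ioff + j
        end
    end
    return a
end

fcollect(1_000) == collect(1:1_000)    # true
fcollect(7, Int32) == 1:7              # true (elementwise comparison; eltype is Int32)
```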

This is twice as fast for T=Int32 as for T=Int64, and both are substantially faster than collect(1:N) when several threads are available. But it’s still a factor of 2 below the memory-bandwidth limit (on my system, at least), so the field is still open.

In the future it would be nice to do this threading automatically, at least in simple cases. That should be doable once threading is more solid and turned on by default.