Int[] is basically the same as Array{Int}(undef, 0), i.e. a Vector of Ints of size 0. Since this vector needs less memory than Array{Int}(undef, 5), evaluating Int[] cannot be slower than evaluating Array{Int}(undef, 5).
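If you want to convince yourself of the equivalence itself (a minimal sketch; both expressions produce an empty Vector{Int}):

x = Int[]
y = Array{Int}(undef, 0)
typeof(x) === typeof(y)        # true: both are Vector{Int}
length(x) == length(y) == 0    # true: both are empty
x == y                         # true: two empty Int vectors compare equal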
Evaluating [1, 2, 3, 4, 5] goes one step further than just evaluating Array{Int}(undef, 5): it also initializes the array elements. Therefore, evaluating [1, 2, 3, 4, 5] cannot be faster than evaluating Array{Int}(undef, 5).
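In other words, the literal does roughly the work of the following (a schematic sketch of what "initializing the elements" means, not the exact code Julia lowers to):

a = Array{Int}(undef, 5)                             # allocate uninitialized storage for 5 Ints
a[1] = 1; a[2] = 2; a[3] = 3; a[4] = 4; a[5] = 5     # the extra initialization step
a == [1, 2, 3, 4, 5]                                 # true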
You can verify this yourself:
julia> using BenchmarkTools
julia> @btime Int[]
17.100 ns (1 allocation: 80 bytes)
Int64[]
julia> @btime Array{Int}(undef, 5)
18.218 ns (1 allocation: 128 bytes)
5-element Array{Int64,1}:
341787760
341783024
341755024
177957920
4934
julia> @btime [1, 2, 3, 4, 5]
19.959 ns (1 allocation: 128 bytes)
5-element Array{Int64,1}:
1
2
3
4
5
Note that the allocated memory changes from Int[] to Array{Int}(undef, 5), but it doesn't change from Array{Int}(undef, 5) to [1, 2, 3, 4, 5]. Also note that it may be the case that Int === Int32 on your computer, but this isn't always true. As you can see, Int === Int64 on my machine.
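You can check what Int aliases on your own machine (the 8-byte result below assumes a 64-bit system):

Int === Int64   # true on 64-bit systems
Int === Int32   # true only on 32-bit systems
sizeof(Int)     # 8 on a 64-bit machine, 4 on a 32-bit one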
If you write type-stable code, you usually don't have to worry about type inference, because it generally happens at compile time, not at run time.
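As a rough illustration (the function names here are made up; @code_warntype from InteractiveUtils lets you inspect what inference concludes): a type-stable function returns the same concrete type for given argument types, while an unstable one does not.

using InteractiveUtils

# Type-stable: always returns a Float64 for a Float64 argument.
stable(x::Float64) = x > 0 ? x : 0.0

# Type-unstable: returns either a Float64 or an Int, so the compiler
# cannot infer a single concrete return type.
unstable(x::Float64) = x > 0 ? x : 0

@code_warntype stable(1.0)     # inferred return type: Float64
@code_warntype unstable(1.0)   # inferred return type: Union{Float64, Int64}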
In your examples, all arrays are one-dimensional (they are vectors). As for the number of elements, you are specifying it for each array either way (Julia can figure out that [1, 2, 3, 4, 5] has five elements), so this isn't a bottleneck in most cases.
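If you do know the number of elements in advance, you can of course exploit it, e.g. by allocating the full length up front instead of growing an empty vector (a small sketch, not something your examples require):

n = 5

v1 = Int[]                    # start empty, grow as needed (may reallocate)
for i in 1:n
    push!(v1, i)
end

v2 = Array{Int}(undef, n)     # allocate once, then fill each slot
for i in 1:n
    v2[i] = i
end

v1 == v2                      # true: both are [1, 2, 3, 4, 5]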