I think you are measuring compile time:
D:\Temp>julia test2.jl 10000
BenchmarkTools.Trial: 23 samples with 1 evaluation.
Range (min … max): 129.072 ms … 331.770 ms ┊ GC (min … max): 0.00% … 36.07%
Time (median): 223.699 ms ┊ GC (median): 39.80%
Time (mean ± σ): 220.838 ms ± 62.594 ms ┊ GC (mean ± σ): 28.47% ± 23.30%
  [histogram: frequency by time, 129 ms … 332 ms]
Memory estimate: 762.94 MiB, allocs estimate: 2.
D:\Temp>julia test1.jl Float64 10000
BenchmarkTools.Trial: 23 samples with 1 evaluation.
Range (min … max): 123.081 ms … 319.950 ms ┊ GC (min … max): 0.00% … 43.44%
Time (median): 203.366 ms ┊ GC (median): 32.62%
Time (mean ± σ): 217.781 ms ± 63.084 ms ┊ GC (mean ± σ): 28.12% ± 23.13%
  [histogram: frequency by time, 123 ms … 320 ms]
Memory estimate: 762.94 MiB, allocs estimate: 4.
Both runs show nearly the same timings.
This was run on Julia 1.7.1 (THIS MAY BE IMPORTANT HERE!).
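If the timings you compared came from a single call (e.g. a plain @time), they include the JIT compilation of that first call. A minimal sketch of the difference, using a hypothetical one-line init_a rather than the scripts below:

using BenchmarkTools
init_a(n) = zeros(Float64, n, n)
@time init_a(10_000);    # first call: compilation + run time
@time init_a(10_000);    # second call: run time only
@btime init_a(10_000);   # many evaluations, so one-off compilation does not dominate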
The two scripts:

test1.jl:
using BenchmarkTools

FloatA = eval(Meta.parse(ARGS[1]))   # element type from the first command-line argument
n = parse(Int, ARGS[2])              # matrix size from the second argument

function init_a(n::Integer)::Matrix{FloatA}
    a = zeros(FloatA, n, n)
    # Initialize a
    return a
end

b = @benchmark init_a(n)

# Print the full BenchmarkTools report
io = IOBuffer()
show(io, "text/plain", b)
s = String(take!(io))
println(s)
and test2.jl:
using BenchmarkTools

#FloatA = eval(Meta.parse(ARGS[1]))  # element type is hard-coded below instead
n = parse(Int, ARGS[1])              # matrix size from the first command-line argument

function init_a(n::Integer)::Matrix{Float64}
    a = zeros(Float64, n, n)
    # Initialize a
    return a
end

b = @benchmark init_a(n)

# Print the full BenchmarkTools report
io = IOBuffer()
show(io, "text/plain", b)
s = String(take!(io))
println(s)
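A small refinement, in case the non-const global n in the @benchmark call bothers you: BenchmarkTools supports interpolating the variable with $, e.g.

b = @benchmark init_a($n)   # $n splices the value in, avoiding the global lookup

The timings above should not change noticeably from this, since almost all of the time is the 763 MiB allocation itself.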