Compilation time not vanishing on subsequent runs

Just for fun, I put your original code in a function:

function testit()
    xs = range(1, 100, 100)
    ys = range(1, 100, 100)
    R(x, y) = (sqrt((x-51)^2 + (y-51)^2))
    autocorr_distances = R.(xs, ys')
    correlation = rand(100,100)
    radial_sum = zeros(50)

    @time begin
        for r in range(1, 50, 50) 
            indexes = findall(x->(x<(r+0.5) && x>(r-0.5)), autocorr_distances)
            sum = 0
            for index in indexes
                sum += correlation[index[1], index[2]]
            end
            radial_sum[trunc(Int,r)] = sum/length(indexes)
        end 
    end
end

I had to add initializations for correlation and radial_sum, which weren't provided in your code. With this function definition I get the following output:

julia> testit()
  0.072925 seconds (12.87 k allocations: 833.984 KiB, 99.13% compilation time)

julia> testit()
  0.000796 seconds (350 allocations: 412.312 KiB)

julia> testit()
  0.000629 seconds (350 allocations: 412.312 KiB)

Roughly the same as your results with Statistics.mean. The moral is to beware of nonconstant globals.
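To make the pitfall concrete, here is a minimal sketch (the names f_global, f_arg, and x are illustrative, not from your code): a function that reads a nonconstant global cannot be inferred, because the global's type could change at any time, while the same computation with the value passed as an argument is fully type-stable.

```julia
x = 1.0                 # nonconstant global: its type can change at any time

f_global() = x + 1      # reads the untyped global; Julia can't infer the result type
f_arg(x) = x + 1        # takes x as an argument; fully inferable

# Both compute the same value, but only f_arg is type-stable.
f_global() == f_arg(1.0) == 2.0
```

Declaring `const x = 1.0`, or annotating the global's type, would also restore inferability; wrapping everything in a function, as above, is usually the simplest fix.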

I would also recommend replacing the range(1,50,50) with the simpler 1:50 which, when iterated, takes integer values, so that radial_sum[trunc(Int,r)] can be replaced by radial_sum[r]. Also, your variable name sum is not ideal, since it shadows the built-in sum function in Julia. Finally, since correlation is presumably a matrix of floating-point values, you should initialize the renamed accumulator (say, summ) to 0.0, or better yet to zero(eltype(correlation)), to avoid type instability.
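Putting those suggestions together, the loop might look like the sketch below. The setup mirrors the testit function above (correlation is still a placeholder rand matrix); note also that a CartesianIndex returned by findall can index the matrix directly, so correlation[index[1], index[2]] is not needed.

```julia
xs = range(1, 100, 100)
ys = range(1, 100, 100)
R(x, y) = sqrt((x - 51)^2 + (y - 51)^2)
autocorr_distances = R.(xs, ys')
correlation = rand(100, 100)            # placeholder, as in testit()

function radial_average(correlation, autocorr_distances)
    radial_sum = zeros(50)
    for r in 1:50                       # integer loop variable: no trunc(Int, r) needed
        indexes = findall(x -> (r - 0.5) < x < (r + 0.5), autocorr_distances)
        summ = zero(eltype(correlation))    # type-stable accumulator, renamed from sum
        for index in indexes
            summ += correlation[index]      # CartesianIndex indexes the matrix directly
        end
        radial_sum[r] = summ / length(indexes)
    end
    return radial_sum
end

radial_sum = radial_average(correlation, autocorr_distances)
```

Since correlation here is a rand matrix, every bin average should land between 0 and 1; with your real data the values will of course differ.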
