What is faster: a sparse vector or a dictionary?

A sparse vector uses binary search; a dictionary uses a hash. Does anyone have experience to share?

In general, most sparse matrix code can be written as iteration over the nonzero elements. For the cases that can't, dictionaries will usually be faster.
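A minimal sketch of that iteration pattern, using the standard-library SparseArrays (the vector contents and the helper name here are made up for the example):

```julia
using SparseArrays

# Sparse vector of length 10 with stored entries at indices 2, 5, 9
v = sparsevec([2, 5, 9], [1.0, 2.0, 3.0], 10)

# Sum i * x over the stored (nonzero) entries only, never touching the zeros
function weighted_nnz_sum(v::SparseVector)
    s = 0.0
    for (i, x) in zip(findnz(v)...)  # findnz returns (indices, values)
        s += i * x
    end
    return s
end
```

This visits only the 3 stored entries, not all 10 slots.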


Dictionary lookup should be faster than binary search of a sorted array once you have more than a few dozen elements.

Thanks. What I need is to read from “random” locations.

So, here is an MWE.

module m

using InteractiveUtils

function _binary_search(array::Array{IT,1}, target::IT, left::IT, right::IT) where {IT}
    @inbounds while left <= right
        mid = fld(left + right, 2)  # middle element position
        if array[mid] < target      # target can only be in the right subarray
            left = mid + 1
        elseif array[mid] > target  # target can only be in the left subarray
            right = mid - 1
        else                        # target found at mid
            return mid
        end
    end
    return 0  # not found
end

N = 5103
r = collect(1:N)
g = collect(N:-1:1)
NLOOP = 1000000
d = Dict{Int, Int}(zip(r, r))

function test_bsearch(N, r, g)
    for l in 1:NLOOP
        for R in Int.(round.([0.13, 0.39, 0.49, 0.61, 0.77, 0.98] * N))
            v = do_bsearch(N, r, g, R)
            @assert v == g[R]
        end
    end
    nothing
end

function do_bsearch(N, r, g, R)
    #@code_warntype _binary_search(r, R, 1, N)
    k = _binary_search(r, R, 1, N)
    return g[k]
end

function test_dict(N, d, g)
    for l in 1:NLOOP
        for R in Int.(round.([0.13, 0.39, 0.49, 0.61, 0.77, 0.98] * N))
            v = do_dict(N, d, g, R)
            @assert v == g[R]
        end
    end
    nothing
end

function do_dict(N, d, g, R)
    k = d[R]
    return g[k]
end

using BenchmarkTools

@btime test_bsearch(N, r, g)
@btime test_bsearch($N, $r, $g)

@btime test_dict(N, d, g)
@btime test_dict($N, $d, $g)

end  # module
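As an aside, Base already ships a binary search over sorted arrays, which can serve as a reference point for a hand-rolled _binary_search like the one above:

```julia
a = [1, 3, 5, 7, 9]

# Index of the first element >= the target; equals the match when present
k = searchsortedfirst(a, 5)   # here a[k] == 5

# When the target is absent, it returns the insertion point instead
k2 = searchsortedfirst(a, 4)  # index where 4 would be inserted
```

Unlike the hand-rolled version, searchsortedfirst does not signal "not found" with 0; you check a[k] == target yourself.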

It seems to be saying that the dictionary is (in this case) nearly twice as fast.
However, I see oodles of allocations from both implementations, and I can't figure out where they come from.
@code_warntype says the code is clean.

I don’t see what is wrong there. Does something pop out for you?

I think your binary search is non-allocating.

julia> const v = [1,5,8,9,10];

julia> const target, left, right = 8, 1, 5;

julia> @allocated _binary_search(v, 8, 1, 5)
0

You could be getting allocations from test code like this:

for R in Int.(round.([0.13, 0.39, 0.49, 0.61, 0.77, 0.98] * N))

where you allocate an array literal and then allocate more arrays to hold the results of broadcasting round and Int.


OMG, I’ve been totally blind.

Actually tried to fix it, no luck!

module m

using InteractiveUtils

function _binary_search(array::Array{IT,1}, target::IT, left::IT, right::IT) where {IT}
    @inbounds while left <= right
        mid = fld(left + right, 2)  # middle element position
        if array[mid] < target      # target can only be in the right subarray
            left = mid + 1
        elseif array[mid] > target  # target can only be in the left subarray
            right = mid - 1
        else                        # target found at mid
            return mid
        end
    end
    return 0  # not found
end

N = 5103
r = collect(1:N)
g = collect(N:-1:1)
NLOOP = 1000000
d = Dict{Int, Int}(zip(r, r))
Rs = Int.(round.([0.13, 0.39, 0.49, 0.61, 0.77, 0.98] * N))

function test_bsearch(N, r, g, Rs)
    for l in 1:NLOOP
        for R in Rs
            v = do_bsearch(N, r, g, R)
            @assert v == g[R]
        end
    end
    nothing
end

function do_bsearch(N, r, g, R)
    k = _binary_search(r, R, 1, N)
    return g[k]
end

function test_dict(N, d, g, Rs)
    for l in 1:NLOOP
        for R in Rs
            v = do_dict(N, d, g, R)
            @assert v == g[R]
        end  
    end
    nothing
end

function do_dict(N, d, g, R)
    k = d[R]
    return g[k]
end

using BenchmarkTools

@btime test_bsearch(N, r, g, $Rs)
@btime test_bsearch($N, $r, $g, $Rs)

@btime test_dict(N, d, g, $Rs)
@btime test_dict($N, $d, $g, $Rs)

end  # module

The above still allocates.

WARNING: replacing module m.
  282.951 ms (2998979 allocations: 61.02 MiB)
  280.877 ms (2998979 allocations: 61.02 MiB)
  94.330 ms (2998979 allocations: 61.02 MiB)
  95.013 ms (2998979 allocations: 61.02 MiB)
Main.m

const NLOOP = 1000000

yields

  157.137 ms (0 allocations: 0 bytes)
  157.251 ms (0 allocations: 0 bytes)
  36.294 ms (0 allocations: 0 bytes)
  37.151 ms (0 allocations: 0 bytes)
Main.m

So, does it box the l? Or why does it allocate?

The compiler does not know what type NLOOP is, so it does not know the type of 1:NLOOP; consequently it does not know what type l is, nor how long the loop is. For all the compiler knows, 1:NLOOP could be anything, and it has to allocate room for every l coming out of it to see whether it's nothing or some (value, state) tuple (as it does for every iterator). Even if you never use l in the loop body, the iterator 1:NLOOP is still consumed as described in Interfaces · The Julia Language.
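A minimal sketch of the difference (the names here are illustrative, not from the benchmark above): a non-const global defeats inference, while a const global or a function argument keeps the loop type-stable:

```julia
nloop = 1000            # non-const global: its type could change at any time,
                        # so the compiler cannot infer the type of 1:nloop
f_global() = sum(l for l in 1:nloop)

const NLOOP2 = 1000     # const global: the type is fixed, loop compiles cleanly
f_const() = sum(l for l in 1:NLOOP2)

f_arg(n) = sum(l for l in 1:n)  # passing the bound as an argument also works
```

All three return the same value; only f_const and f_arg avoid the dynamic dispatch and boxing in the loop.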


What if I never defined the name (i.e. used _)? Edit: That wouldn’t have helped at all.

From this simple example I had some hopes that by using a dictionary instead of a sparse data structure (a vector) I could get a speedup. Alas, in my real application the opposite was found: the binary search is at least 3x as fast as the dictionary. Not sure why. Perhaps the memory fragmentation of the dynamic structures?

Doesn't it show the opposite? The numbers you report are 280 ms for binary search and 95 ms for dictionaries; Chris Elrod's are 157 ms for binary search and 36 ms for dictionaries. FWIW I get

  157.283 ms (2998979 allocations: 61.02 MiB)
  157.035 ms (2998979 allocations: 61.02 MiB)
  55.314 ms (2998979 allocations: 61.02 MiB)
  55.730 ms (2998979 allocations: 61.02 MiB)

for the non-const NLOOP, and

  127.844 ms (0 allocations: 0 bytes)
  127.847 ms (0 allocations: 0 bytes)
  21.291 ms (0 allocations: 0 bytes)
  21.545 ms (0 allocations: 0 bytes)

for the const one, also seeing much faster perf with dictionaries.

He was saying that this example made the dict look faster, which made him hopeful about speeding up his real use case; but when he tested that, he found that binary search was actually faster there.
