Measurements Indicate that this is “Faster than the Speed of Light”

I created a variable of type BitSet and used setdiff! to remove elements from it one at a time until it was empty. This took linear time as a function of the set size, with a constant of about 10 cycles per element (on an x64 machine). This makes sense: if the set were implemented as a bit array, deletions could be fast. But it implies that isempty is not scanning the bit array to decide whether the set is empty. It must be consulting a hidden variable that keeps track of the number of elements in the set. Because length works on BitSet (though I can't find this in the documentation), the existence of such a hidden variable seems natural.
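A minimal sketch of this experiment (my reconstruction; the function name and the removal order are mine, not from the original post):

```julia
using Random

# Drain a BitSet one element at a time with setdiff!, removing in random
# order, and return the elapsed time. The claim above is that this is
# linear in n, which is consistent with isempty consulting a cached count
# rather than scanning the bit array.
function drain_time(n::Int)
    s = BitSet(1:n)
    order = shuffle!(collect(1:n))
    t = @elapsed for x in order
        setdiff!(s, x)   # numbers are iterable in Julia, so this removes x
    end
    @assert isempty(s)
    return t
end
```

Timing drain_time(n) for a few doubling values of n should show roughly proportional growth.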

But then it occurred to me that draining a set with pop! must be O(n^2) in total: if a set is mostly empty, after popping an element it might be necessary to scan a large portion of the underlying array to find the next element to pop.

But, still trying to figure out simple things in Julia, I timed alternating between removing a random element using setdiff! and popping using pop!. The result is linear, with a constant of 13 or 14 cycles per element. This is too fast: it violates the theory for the assumed data structure.
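The alternating experiment might look like this (again my own sketch, not the original code; pop! on a BitSet removes the largest element, which is exactly what makes the "scan for the next element" cost interesting):

```julia
# Alternately remove one random element with setdiff! and pop the largest
# with pop!, until the set is empty. The surprising measurement above is
# that the total time is linear in the initial size. (collect is O(m)
# here; a real benchmark would time only the setdiff!/pop! calls.)
function alternate_drain!(s::BitSet)
    while !isempty(s)
        x = rand(collect(s))      # a random surviving element
        setdiff!(s, x)
        isempty(s) || pop!(s)     # pop! removes the current maximum
    end
    return s
end
```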

The implication is that the underlying data structure must be significantly richer than assumed. There must be something like a hidden queue. The queue structure must point to elements in the bit array and there must be an index for finding elements in the queue when they are removed from the set, so they can also be removed from the queue.
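For concreteness, the hypothesized structure might look like this (purely speculative on my part; as becomes clear later in the thread, this is NOT how Base.BitSet is actually implemented):

```julia
# Hypothetical "bit array + hidden queue" design: the queue remembers the
# stored elements so pop! never has to scan, and the index lets a removal
# via setdiff! also delete the element from the queue in O(1).
struct QueuedBitSet
    bits::BitVector        # membership bits
    queue::Vector{Int}     # stored elements
    index::Dict{Int,Int}   # element => position in queue
end
```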

Is this true ???

If so it is both cool and disturbing. It is fast! But I was in the process of creating the auxiliary structures required to make it O(n) or O(n ln(n)). These structures now seem redundant with existing (likely faster) hidden structures.

I am wondering:

  • What was the direct way to figure this out?
  • Am I doing something similarly wrong with other data structures?
  • Is this a serious memory problem when I don’t need to push or pop?
  • Is it possible to add the hidden queue only when it becomes clear that pop! will be used on this “object”? This seems likely to create new issues, but …

The plot is:

[Plot: "Set Speed Test"]

Should I upload the code?

Nope, definitely not. Here’s the full definition of the type, which you can find by doing @edit BitSet():

mutable struct BitSet <: AbstractSet{Int}
    bits::Vector{UInt64}
    # 1st stored Int equals 64*offset
    offset::Int

    BitSet() = new(sizehint!(zeros(UInt64, 0), 4), NO_OFFSET)
end
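As a sanity check (verified on recent Julia 1.x versions; treat it as an assumption for older ones), fieldnames confirms there are only these two fields and no hidden queue:

```julia
# Inspect BitSet's fields directly: only the word vector and its offset
# are stored, so there is no hidden queue.
fieldnames(BitSet)   # (:bits, :offset)
```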

I think you did not understand the complexity of BitSet correctly. It is not polynomial in the number of elements stored in the set; it is polynomial in the largest number stored in the set. Each pop! is not O(n^2): it just scans the set and zeroes the first position that is one, which is O(N) (where N is the size of the underlying array), and for a heavily populated set it will often be very fast (because the first non-zero bit is found soon). Did you mean a sequence of pop!s?
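One way to see that storage (and scan cost) scales with the range of stored values rather than the cardinality is this sketch, which peeks at the internal bits field (an implementation detail, not public API):

```julia
# A set with few elements spanning a wide value range needs many more
# 64-bit words than a dense set with more elements in a narrow range.
dense  = BitSet(1:64)         # 64 elements, narrow range: a couple of words
sparse = BitSet([1, 10^6])    # 2 elements, wide range: ~10^6/64 words
(length(dense.bits), length(sparse.bits))
```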

It is also possible that your BitSet is not yet large enough to thrash the cache, so the timings will be very good.

Yes, I was timing a sequence of pop!s. But you are correct that I've made a silly mistake in my analysis, although not the one you're suggesting.

I was assuming that a single pop! is O(n) and I was doing n of them. But the timing suggested pop! was O(1) (or perhaps O(ln(ln(n)))).

Your response reveals that I’ve not been clear about my definitions:

  • Let n be the range of the iterator used in the construction of the BitSet (i.e., the number of elements in the underlying Vector).

  • Let m be the number of elements in the mathematical set (i.e., the number of non-zero entries in the underlying Vector).

  • Let \ell be the range of elements in the mathematical set (i.e., the maximum value in the set minus the minimum value in the set, plus one).

Then, if there is a hidden variable that keeps track of the maximum value in the mathematical set (i.e., a kind of “next element to pop” variable), it can be updated during a pop! in O(\ell/m) time.
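The expected scan length behind that O(\ell/m) figure can be checked empirically with a small helper (my own, not from the post):

```julia
# Average gap between consecutive elements of a BitSet: with m elements
# spanning a range of ℓ values, the mean spacing is ℓ/m, which is the
# expected distance a scan must cover to find the next maximum.
function mean_gap(s::BitSet)
    v = collect(s)          # elements of a BitSet come out sorted
    m = length(v)
    ℓ = v[end] - v[1] + 1
    return ℓ / m
end
```

For example, mean_gap(BitSet(1:2:199)) is 199/100 ≈ 1.99: every other value present, so each step scans about two positions.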

In the code I timed, n is held constant as I drain the set, but the set is drained in such a way that m and \ell shrink together, so \ell/m is almost constant. So the timing chart is consistent with the assumed data structure.

This is the solution to my issue. This is the way to check that the data structure is not more than I’ve assumed.