I don’t have a GPU on hand right now, and I’m curious whether this function could be accelerated on a GPU. I used to think GPUs could only speed up arithmetic functions like +, -, *, and so on. In my experience this kind of function spends a lot of time in GC, so can a GPU reduce the GC time too?

```julia
function test_func(n::Int)
    bit_set_vec = [BitSet() for i = 1:n]
    bit_set = BitSet([1, 2, 3])
    for i = 1:n
        bit_set_i = bit_set_vec[i]
        union!(bit_set_i, bit_set)
        setdiff!(bit_set_i, bit_set)
    end
    return nothing
end
```

I think this example might be too minimal, since it doesn’t actually do any computation; but in general, a Vector{BitSet} is unlikely to be the optimal data layout for your problem.
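To illustrate the data-layout point: if the universe of possible elements is bounded, you can store all the sets as rows of a single UInt64 matrix, where `union!` becomes a bitwise `|` and `setdiff!` becomes `& ~`. A minimal sketch, assuming a bounded universe (`maxelem` and the function name are illustrative); these broadcasts are exactly the kind of elementwise operation a GPU array type (e.g. a CuArray from CUDA.jl) accelerates with the same syntax:

```julia
# Sketch: fixed-universe bitsets packed into one matrix.
# Each set is one column of UInt64 chunks; set algebra is plain
# bitwise arithmetic, so everything is one allocation up front
# and broadcasts over the whole collection at once.
function test_func_bitmask(n::Int, maxelem::Int = 64)
    nchunks = cld(maxelem, 64)
    sets = zeros(UInt64, nchunks, n)   # all n sets, allocated once
    mask = zeros(UInt64, nchunks)      # encodes BitSet([1, 2, 3])
    for e in (1, 2, 3)
        mask[(e - 1) ÷ 64 + 1] |= UInt64(1) << ((e - 1) % 64)
    end
    sets .|= mask    # union! applied to every set at once
    sets .&= .~mask  # setdiff! applied to every set at once
    return sets
end
```

On a CPU this already avoids the per-`BitSet` allocations; swapping the `zeros` calls for GPU arrays would move the broadcasts to the device without changing the set-algebra lines.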

There are two parts of this function that consume most of the time: 1. bit_set_vec = [BitSet() for i = 1:n], and 2. union!(bit_set_i, bit_set) followed by setdiff!(bit_set_i, bit_set), which is the bottleneck in my code. I tried other data layouts like Vector{Tuple}; that did reduce the GC time, but the union!/setdiff! operations on a Tuple are much less efficient than on a BitSet, so I gave up on it.
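If the first part (the comprehension allocating n fresh BitSets) dominates the GC time, one option that keeps BitSet and its fast union!/setdiff! is to allocate the vector once and reuse it across calls, clearing each set with empty! instead of reconstructing it. A sketch under that assumption (the function and variable names are illustrative):

```julia
# Sketch: hoist the allocation out of the hot loop and reuse the
# buffers. empty! resets a BitSet without discarding its backing
# storage, so repeated calls should put far less pressure on the GC.
function test_func_prealloc!(bit_set_vec::Vector{BitSet}, bit_set::BitSet)
    for bs in bit_set_vec
        empty!(bs)               # reuse storage instead of allocating
        union!(bs, bit_set)
        setdiff!(bs, bit_set)
    end
    return nothing
end

buffers = [BitSet() for _ in 1:1000]   # allocated once, reused every call
test_func_prealloc!(buffers, BitSet([1, 2, 3]))
```

This does not need a GPU at all: it attacks the GC cost directly, which a GPU would not help with, since device kernels do not run Julia's garbage collector.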