I have `vecs::Vector{Vector{Float32}}` (really, the columns of a dataframe), each of which has a `length` of 300000. I’m interested in the performance of `v[t]`, where `t` is a `BitVector`. I’m finding that performance is strangely bimodal. Why?
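
For reference, a synthetic setup that should reproduce the shape of the data (the 50 columns and the block length of 1000 are arbitrary guesses; my real vectors come from a dataframe):

```
n = 300_000
vecs = [rand(Float32, n) for _ in 1:50]      # stand-in for the dataframe columns

# Mask with long runs of true/false; mean(t) comes out near 0.75.
t = falses(n)
for start in 1:1_000:n
    if rand() < 0.75
        t[start:min(start + 999, n)] .= true
    end
end
```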

```
using Plots

# t = rand(Bool, length(vecs[1]))
p = plot(ylim=[0.0003, 0.001], legend=false)
GC.enable(false)              # keep collections from polluting the timings
for i in 1:10
    plot!([@elapsed(v[t]) for v in vecs])   # one point per column, ten passes
end
GC.enable(true)
p
```

If I leave the GC on, I see even more jumps in the timings.

In the above, `mean(t)` is 0.75, but it has long stretches of `true` and `false`. If I use `t = rand(Bool, length(vecs[1]))` instead, the bimodality disappears.
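
A density-matched comparison would look something like this (sketch; `t_rand` is just a name I’m using here, and it keeps `mean(t)` at roughly 0.75 while removing the long runs):

```
# Random mask matched to mean(t) ≈ 0.75, but without long runs
t_rand = rand(length(vecs[1])) .< 0.75

times_struct = [@elapsed v[t]      for v in vecs]
times_rand   = [@elapsed v[t_rand] for v in vecs]
extrema(times_struct), extrema(times_rand)
```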

Am I seeing memory hierarchy effects? GC generations?
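
One check I could run (a sketch, assuming Julia ≥ 1.5 where `@timed` returns a named tuple with a `gctime` field) is to record GC time separately for each indexing call and see whether the slow cluster has nonzero `gctime`:

```
# Split wall-clock time into GC and non-GC components per column.
results = [@timed v[t] for v in vecs]
total   = [r.time   for r in results]
gctime  = [r.gctime for r in results]

# If the slow calls show gctime ≈ 0, collections aren't the cause.
slow = total .> 2 * minimum(total)
count(slow), count(gctime[slow] .> 0)
```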