For me, SQLite (wrapped in Julia) handles ≈1.8e9 records in a single ≈400 GB file just fine, and it is definitely much faster than .csv. I have not benchmarked it against HDF5 yet, though; HDF5 could be a notch faster for raw reading.
What does “exploring” these 1e8 time series mean? If it is simulated data, then presumably a large number of parameters with complex relations were varied across the different time series? In that case SQL itself, and the option to keep multiple indices that can be added or changed in the course of the exploration, could be handy. HDF5's support for indexing is limited, or requires extensions.
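As a rough sketch of what I mean (not my actual setup): with SQLite.jl you can keep one table of per-series parameters and one of samples, add or drop indices as your questions change, and pull out only the series whose parameters are currently of interest. The file name, table names, and parameters (`sigma`, `rho`) below are just made up for illustration.

```julia
using SQLite, DataFrames

db = SQLite.DB("timeseries.db")   # hypothetical file name

# One row of parameter metadata per simulated series,
# one row per (series, t) sample.
DBInterface.execute(db, """
    CREATE TABLE IF NOT EXISTS runs (
        run_id INTEGER PRIMARY KEY,
        sigma  REAL,
        rho    REAL
    )""")
DBInterface.execute(db, """
    CREATE TABLE IF NOT EXISTS samples (
        run_id INTEGER,
        t      REAL,
        value  REAL
    )""")

# Indices can be created (and dropped) as the exploration shifts focus,
# without rewriting the hundreds of GB of data itself.
DBInterface.execute(db, "CREATE INDEX IF NOT EXISTS idx_runs_sigma ON runs(sigma)")
DBInterface.execute(db, "CREATE INDEX IF NOT EXISTS idx_samples_run ON samples(run_id)")

# Pull only the series whose parameters match the current question.
df = DataFrame(DBInterface.execute(db, """
    SELECT s.run_id, s.t, s.value
    FROM samples s
    JOIN runs r ON r.run_id = s.run_id
    WHERE r.sigma BETWEEN 0.1 AND 0.2
"""))
```

That kind of ad-hoc filtering on changing criteria is where I would expect SQLite to be more convenient than HDF5, even if HDF5 wins on raw sequential reads.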