I was just going to reply that this probably won’t be possible and to ask about your use case (30 GB implies quite a lot of rows and/or columns which, if sparse, are probably a pain to handle outside the SparseArrays infrastructure), but then I thought: ah well, this is Julia after all, so why wouldn’t two packages just work?
You can think of a DataFrame as a collection of named columns, each of which is a standard Julia vector (any AbstractVector will do), so you can construct a DataFrame straight from the columns of a sparse matrix:
julia> using DataFrames, SparseArrays
julia> I = [1, 4, 3, 10_000_000]; J = [4, 7, 200_000, 9]; V = [1, 2, -5, 6];
julia> S = sparse(I, J, V);
julia> df = DataFrame(["x$i" => c for (i, c) ∈ enumerate(eachcol(S))]...)
10000000×200000 DataFrame
(...)
and just to show that I’m not doing this on a machine with 2 TB of RAM:
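a quick sanity check is Base.summarysize, which reports the bytes an object actually holds. The exact numbers will depend on your Julia and DataFrames versions and on whether the constructor copied the columns, but they should be nowhere near the roughly 16 TB a dense 10,000,000 × 200,000 Int64 matrix would need:

julia> Base.summarysize(S)   # the sparse matrix: a handful of nonzeros plus the index vectors

julia> Base.summarysize(df)  # the DataFrame built from its columns; compare both against the dense equivalent to convince yourself nothing was densified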
This is exactly the way to do it. The only cost is that some functions defined in DataFrames.jl will not work correctly, as they assume the column vectors can be resized.
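For example (a rough sketch of the kind of limitation meant here; the exact error depends on which column type the constructor gave you), row-appending functions such as push! need columns that support resizing, which a sparse-backed column generally doesn’t; densifying an individual column restores the full vector API for just that column:

julia> push!(df, zeros(Int, ncol(df)))  # expected to fail: sparse-backed columns can't be resized in place

julia> df.x4 = Vector(df.x4);  # densify one column (≈80 MB here) if some operation needs a resizable Vector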