[ANN] CSV.jl 0.7 Release

Thanks for the info, though it's a pity about the change, which users now have to adapt to once again. A little comfort for the user can only help…!
This is of course only my limited view as a data analyst: in most cases the user will read the CSV file and store it in a DataFrame. So far I have found it very pleasant that Julia behaves like R at this point:

# R:
> Data <- read.csv2("mtcars.csv")
> str(Data)
'data.frame':   32 obs. of  12 variables:
 $ X   : chr  "Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" ...
 $ mpg : num  21 21 22.8 21.4 18.7 18.1 14.3 24.4 22.8 19.2 ...
 $ cyl : int  6 6 4 6 8 6 8 4 4 6 ...
# Julia:
dateiname = "mtcars.csv"
Daten = CSV.read(dateiname, header = 1, delim = ';', ...... )
show(Daten, allcols = true)

There are a lot of benefits to not requiring DataFrames in CSV. All you have to do now is call DataFrame after CSV.File, which is not a very big change.

# Julia:
dateiname = "mtcars.csv"
Daten = CSV.File(dateiname, header = 1, delim = ';') |> DataFrame

Besides, I find myself calling as_tibble() all the time in R, which is quite similar.

1 Like

Yes, of course, but that is not the point (from my point of view). This small change - possibly helpful from a technical standpoint - makes it necessary to update existing code and documentation. And I’m sure some users will wonder whether it was really necessary.

A particular example where the dependency on DataFrames was a problem was when DataFrames 0.21 came out with a few breaking changes. CSV had a new release which updated DataFrames to this version, but since that was not a breaking change in CSV, it was tagged as 0.6.2, meaning full compatibility with all previous 0.6 versions. Since people were unknowingly relying on the DataFrames API, though, this broke some of their code. With the dependency on DataFrames now made explicit, it is much easier to restrict DataFrames to a specific version, and CSV doesn’t have to worry about breakage in DataFrames.

I’d just like to point out that such decisions are usually not made haphazardly, and a lot of thought goes into making as few breaking changes as possible. For packages to evolve, though, sometimes breaking changes are needed to provide a better experience in the end.


Seems like a lot of stuff is broken. Does dropmissing! not work anymore? When reading a CSV with dat = DataFrame!(CSV.File("./Leinhardt.csv")), I get these column types…

4-element Array{Type,1}:
 Union{Missing, Float64}

Then I drop my missing values with dropmissing!(dat, disallowmissing=true), where the kwarg suggests that column types will be T instead of Union{T, Missing}. But this doesn’t work anymore. Running eltypes(dat) still returns

4-element Array{Type,1}:
 Union{Missing, Float64}

How can I make the second column of type T?
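Until the underlying bug is fixed, one workaround is to narrow the column type by hand with disallowmissing (exported by DataFrames). A minimal sketch, with made-up data standing in for the Leinhardt columns:

```julia
using DataFrames

# Hypothetical column that is Union{Missing, Float64}:
dat = DataFrame(infant = Union{Missing,Float64}[10.0, missing, 30.0])

dropmissing!(dat)                        # rows containing missing are gone...
# ...and if the eltype is still Union{Missing, Float64}, narrow it manually:
dat.infant = disallowmissing(dat.infant)

eltype(dat.infant)                       # Float64
```
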


GLM.jl is breaking too, but I’m not sure if it’s because of the CSV changes.

## define a linear model
lmod = lm(@formula(infant ~ logincome), dat)
MethodError: no method matching fit(::Type{LinearModel}, ::Array{Float64,2}, ::Array{Float64,2}, ::Bool)

Maybe it’s a problem with the new datatype

> dat.infant
101-element SentinelArrays.SentinelArray{Float64,1,Float64,Missing,Array{Float64,1}}:

That might be due to the sentinel vectors (SentinelArrays.jl) CSV now uses under the hood. You should be able to do DataFrame(CSV.File(...)) (w/o the !) to get back the old behavior of making a copy.

That does not work either. I’ve tried using CSV.read as well and no dice.

There seems to be a bug in similar(::SentinelArray, T, ...): it returns a SentinelArray with eltype Union{T, Missing}.


Thanks so much for your work on this. Just FYI, I fall into the category of data analysts who does not use DataFrames, and I think decoupling the two makes perfect sense. If users want to read directly to DataFrames, it makes sense to me that the function that does that should be in DataFrames, not CSV.


Not quite; the whole uncompressed data indeed has to live in memory, but there’s no duplicate made for CSV.File; only the output column arrays are allocated in addition to the file buffer. The ideal solution is to unzip the input to an uncompressed file on disk, then pass the uncompressed filename to CSV.File, which will mmap the input buffer; this lessens overall memory pressure since the OS can swap mmapped pages in as needed while parsing.
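That unzip-then-mmap workflow can be sketched roughly as follows, assuming a gzipped input and the CodecZlib.jl package (the file names and the helper function are illustrative, not part of CSV.jl):

```julia
using CSV, CodecZlib

# Decompress a gzipped CSV to a plain file on disk first, so CSV.File can
# mmap the uncompressed bytes instead of holding them all in RAM.
function read_gzipped_csv(gzpath::AbstractString)
    csvpath = tempname() * ".csv"
    open(gzpath) do io
        open(csvpath, "w") do out
            write(out, read(GzipDecompressorStream(io)))
        end
    end
    return CSV.File(csvpath)
end
```
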

The overhead is not really performance, just memory; you basically just allocate a single contiguous array, then copyto! all the chains. The overhead of using ChainedVector is surprisingly not bad; in my tests it was rarely more than a 2x hit on indexing, which would very quickly get swamped by other computational costs in whatever processing you’re doing. I also have some ideas sketched out to make common operations multithreaded over the chains; it makes a lot of sense because you get the ChainedVectors from a multithreaded csv parsing scenario, which means the data is largish, and they’re naturally split into thread-friendly chains. I think it’ll be a fun/interesting experiment to see what kind of performance boosts we can get over regular Arrays.
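The flattening step described above can be sketched directly with SentinelArrays.ChainedVector (the data here is made up):

```julia
using SentinelArrays

# A ChainedVector stitches separately allocated chunks into one AbstractVector.
cv = ChainedVector([[1, 2], [3, 4, 5]])

# Flattening is one contiguous allocation plus copyto! over the chains.
flat = Vector{eltype(cv)}(undef, length(cv))
copyto!(flat, cv)
```
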

The other thought here is that as you get up to really large datasets, you have to switch over at some point to processing data in “batches”, so this is hopefully a way to ease the transition up, along with the new CSV.Chunks functionality.
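A minimal sketch of that batched style with CSV.Chunks (the file here is a tiny stand-in for a genuinely large one, and the per-row logic is a placeholder):

```julia
using CSV

# A stand-in "large" file, just for illustration.
path = tempname() * ".csv"
write(path, "a,b\n1,2\n3,4\n5,6\n")

# Iterate the file in batches; each chunk iterates rows like a CSV.File.
nrows = 0
for chunk in CSV.Chunks(path)
    for row in chunk
        # process row.a, row.b, ... here
        global nrows += 1
    end
end
```
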

The other reason for this change is to hopefully encourage people to use CSV.File directly if the full DataFrame functionality isn’t needed. There are a lot of workflows that just need to load data into column arrays, do some quick statistical processing on a few columns, then output the results somewhere. This can easily be achieved by just operating on CSV.File columns directly instead of needing to even use DataFrames at all. A huge advantage of Julia is we don’t have to only have a single, one-size-fits-all solution to these kinds of problems (e.g. “you have to load everything into a DataFrame”).
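For instance, a quick sketch of that DataFrames-free workflow (the sample file and column names are hypothetical):

```julia
using CSV, Statistics

# A tiny sample file just for illustration.
path = tempname() * ".csv"
write(path, "x,y\n1,10\n2,20\n3,30\n")

# Load straight into column arrays -- no DataFrame needed.
f = CSV.File(path)
m = mean(f.x)   # columns are accessible as properties of the CSV.File
s = sum(f.y)    # row iteration (for row in f) works as well
```
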

Thanks for the report @affans, this has been fixed in a patch release to SentinelArrays.jl. Not sure what the GLM.jl issue is you reported, but feel free to open an issue there and we can look into it (from first glance, it doesn’t seem related to CSV.jl changes).


So for e.g. pointer arrays the extra memory wouldn’t really be a problem since the pointer array is much smaller than the sum of the content of the array?

In what way are the chains more thread friendly than a normal array? You have the separate chains because they need to grow while they are being created in the parser but once they are done, having a bunch of separately allocated chains seems worse than just a big array (that you can of course “chunk” by dividing up the indices)?

Thanks again for linking to that PR. It was a very interesting exercise to work through it in detail.

The lesson I learned from it for general code is that manually “unrolling” a limited set of types with an if ... elseif ... chain, ideally type-stable within each branch, can be very efficient. Very neat.

Also, I looked at the code for

and found it very instructive. Specifically, when I experimented with constructs like this I always ran into the problem of choosing sentinels when something like NaN is not available. Just randomizing that choice and changing it on demand is very elegant.


You need to be a bit careful when it comes to thread safety for this (since it is a shared “global” value in the array).

1 Like

Yes, as noted in the docs, mutating operations (i.e. setindex!) are not thread-safe in the case where you might run into sentinel collisions, which is only really a problem for <: Integer eltypes. It’s a pretty annoying corner-case, but I haven’t seen an easy way around it. It’s another reason I’m hopeful we can move away from SentinelArrays.jl eventually.
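To illustrate the sentinel idea (a sketch; the values are made up):

```julia
using SentinelArrays

# A SentinelVector stores `missing` as a hidden sentinel value of the eltype;
# a fresh vector starts out all-missing.
v = SentinelVector{Int64}(undef, 3)
v[1] = 1
v[2] = missing
v[3] = 3

# If a stored value ever collides with the current sentinel, SentinelArrays
# picks a new random sentinel and recodes the array -- that recoding is the
# operation that is not thread-safe.
```
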


I would advise against using

DataFrame!(CSV.File(file; kw...))

if you plan on doing any mutating of your data frame. It’s safer to use

DataFrame(CSV.File(file; kw...))

Otherwise you’ll run into problems like this:

"df.csv" file:

x,y
a,1
b,2


julia> using DataFrames, CSV

julia> df = DataFrame!(CSV.File("df.csv"))
2×2 DataFrame
│ Row │ x      │ y     │
│     │ String │ Int64 │
│ 1   │ a      │ 1     │
│ 2   │ b      │ 2     │

julia> filter!(r -> r.x == "b", df)
ERROR: MethodError: no method matching deleteat!(::CSV.Column{String,String}, ::Array{Int64,1})
Closest candidates are:
  deleteat!(::Array{T,1} where T, ::AbstractArray{T,1} where T) at array.jl:1275
  deleteat!(::Array{T,1} where T, ::Any) at array.jl:1274
  deleteat!(::BitArray{1}, ::Any) at bitarray.jl:962
 [1] (::DataFrames.var"#161#162"{Array{Int64,1}})(::CSV.Column{String,String}) at /Users/bieganek/.julia/packages/DataFrames/htZzm/src/dataframe/dataframe.jl:873
 [2] foreach(::DataFrames.var"#161#162"{Array{Int64,1}}, ::Array{AbstractArray{T,1} where T,1}) at ./abstractarray.jl:1919
 [3] delete!(::DataFrame, ::Array{Int64,1}) at /Users/bieganek/.julia/packages/DataFrames/htZzm/src/dataframe/dataframe.jl:873
 [4] _filter!_helper(::DataFrame, ::Function, ::DataFrames.DataFrameRows{DataFrame,DataFrames.Index}) at /Users/bieganek/.julia/packages/DataFrames/htZzm/src/abstractdataframe/abstractdataframe.jl:1075
 [5] filter!(::Function, ::DataFrame) at /Users/bieganek/.julia/packages/DataFrames/htZzm/src/abstractdataframe/abstractdataframe.jl:1056
 [6] top-level scope at REPL[3]:1

My actual code was more complicated, and I was getting an inscrutable UndefRefError:

ERROR: UndefRefError: access to undefined reference

so I wasted hours trying to figure out what the real problem was.

The UndefRefError that I got with my real code appears to be related to a SentinelArray column that was in the data frame that I got from DataFrame!(CSV.File(file)).

It looks like you’re still using an older (~0.6) CSV.jl release; in the latest 0.7 release, CSV.Column doesn’t exist. I’m not sure why you suspect issues with SentinelArrays.jl, since it doesn’t show up in your backtrace at all. There have been a few SentinelArrays.jl issues since the 0.7 release, but they’ve been fixed pretty quickly as people report them (i.e. open an issue at the SentinelArrays.jl repo).


Ah, shoot. I fell back to my base environment when making the MWE. My real code ran in an environment with CSV v0.7.4. Here’s the stacktrace from my real code:

ERROR: UndefRefError: access to undefined reference
 [1] getindex at ./array.jl:787 [inlined]
 [2] index at /Users/bieganek/.julia/packages/SentinelArrays/zmvEI/src/chainedvector.jl:33 [inlined]
 [3] deleteat! at /Users/bieganek/.julia/packages/SentinelArrays/zmvEI/src/chainedvector.jl:144 [inlined]
 [4] deleteat!(::SentinelArrays.SentinelArray{String,1,UndefInitializer,Missing,SentinelArrays.ChainedVector{String,Array{String,1}}}, ::Int64) at /Users/bieganek/.julia/packages/SentinelArrays/zmvEI/src/SentinelArrays.jl:279
 [5] deleteat!(::SentinelArrays.SentinelArray{String,1,UndefInitializer,Missing,SentinelArrays.ChainedVector{String,Array{String,1}}}, ::Array{Int64,1}) at /Users/bieganek/.julia/packages/SentinelArrays/zmvEI/src/SentinelArrays.jl:291
 [6] (::DataFrames.var"#161#162"{Array{Int64,1}})(::SentinelArrays.SentinelArray{String,1,UndefInitializer,Missing,SentinelArrays.ChainedVector{String,Array{String,1}}}) at /Users/bieganek/.julia/packages/DataFrames/htZzm/src/dataframe/dataframe.jl:873
 [7] foreach(::DataFrames.var"#161#162"{Array{Int64,1}}, ::Array{AbstractArray{T,1} where T,1}) at ./abstractarray.jl:1919
 [8] delete!(::DataFrame, ::Array{Int64,1}) at /Users/bieganek/.julia/packages/DataFrames/htZzm/src/dataframe/dataframe.jl:873
 [9] _filter!_helper(::DataFrame, ::Function, ::DataFrames.DataFrameRows{DataFrame,DataFrames.Index}) at /Users/bieganek/.julia/packages/DataFrames/htZzm/src/abstractdataframe/abstractdataframe.jl:1075
 [10] filter! at /Users/bieganek/.julia/packages/DataFrames/htZzm/src/abstractdataframe/abstractdataframe.jl:1056 [inlined]
 [11] process_child_file(::StateChildFile, ::String, ::Dict{Tuple{String,String,String,Date},Int64}) at /Users/bieganek/projects/NCANDS/NCANDS.jl/src/preprocess.jl:393
 [12] top-level scope at REPL[20]:1

I think this bug was fixed in the 1.2.8 release of SentinelArrays.jl; can you check your version?


Ah, yes, thanks! It turns out my environment actually had CSV v0.7.3, and thus SentinelArrays v1.2.7. Upgrading CSV to v0.7.4 fixed the problem. Thanks!

I thought my environment was up-to-date, but this serves as a reminder to run ] update just to be sure before submitting bug reports…

1 Like