CSV read performance vs Pandas

Any idea why Pandas’ CSV parser is so much faster than CSV.jl? Below is my test for a 20 MB file with 100,000 rows x 12 columns, mostly Float64s.
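
(Setup, with the packages loaded first:)

julia> using CSV, Pandas, BenchmarkTools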

julia> @benchmark CSV.read("/tmp/test.csv")
BenchmarkTools.Trial: 
  memory estimate:  163.82 MiB
  allocs estimate:  9108142
  --------------
  minimum time:     1.226 s (22.79% GC)
  median time:      1.350 s (29.04% GC)
  mean time:        1.347 s (28.63% GC)
  maximum time:     1.464 s (32.77% GC)
  --------------
  samples:          4
  evals/sample:     1

julia> @benchmark Pandas.read_csv("/tmp/test.csv")
BenchmarkTools.Trial: 
  memory estimate:  10.89 KiB
  allocs estimate:  83
  --------------
  minimum time:     429.263 ms (0.00% GC)
  median time:      439.544 ms (0.00% GC)
  mean time:        440.220 ms (0.00% GC)
  maximum time:     451.383 ms (0.00% GC)
  --------------
  samples:          12
  evals/sample:     1

shell> ls -l /tmp/test.csv
-rw-r--r--  1 tomkwong  wheel  21168794 Mar  8 22:06 /tmp/test.csv

shell> wc /tmp/test.csv
  100000  100000 21168794 /tmp/test.csv

R’s data.table is also fast

1 Like

I can’t reproduce your timings… for a 100,000 x 12 table of random floats, I get 70ms with Julia and 129ms with Pandas.

Performance does suffer a bit with mixed types. In t2.csv, I changed the first column to ints (notice the huge increase in allocations), but still nothing like your timings. What types are you using?

shell> wc -l /tmp/t.csv
100001 /tmp/t.csv

shell> head /tmp/t.csv
k,m,i,l,p,b,e,o,d,n,a,c
0.2409627,0.4039211,0.1651225,0.5667682,0.5894277,0.9943912,0.547174,0.8407955,0.4460115,0.3069141,0.6926464,0.8103599
0.9240403,0.7549936,0.6487547,0.8099625,0.1726365,0.8733207,0.6790217,0.4219462,0.3477867,0.8507886,0.8026299,0.2950769
0.4712006,0.439265,0.4611259,0.4271396,0.02038192,0.8205912,0.8553605,0.02757602,0.2683213,0.8179719,0.7291129,0.06494521
0.996349,0.2351459,0.5336153,0.369024,0.1737951,0.779296,0.694431,0.5706579,0.3687971,0.7410107,0.9468887,0.9853139
0.8812785,0.06228102,0.4772179,0.4879081,0.5711414,0.7249674,0.7723914,0.3086208,0.9367701,0.3911373,0.7716293,0.8062436
0.2830349,0.2336157,0.03236917,0.375143,0.4423652,0.4979272,0.6796134,0.2883468,0.5686293,0.04900234,0.6169183,0.02231071
0.464235,0.4393972,0.7299044,0.7271473,0.644167,0.1910563,0.9823838,0.01284925,0.7959115,0.5527272,0.08613271,0.1721825
0.404164,0.01794849,0.8456904,0.9365643,0.3868644,0.4041997,0.8453569,0.136697,0.8901002,0.500644,0.1406727,0.3415088
0.6361285,0.6320285,0.6464563,0.8733505,0.04576213,0.6102426,0.3786439,0.2345142,0.2210209,0.5040782,0.5065185,0.2678539
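
(For reference, a file of this shape can be generated with something like the following; this is a sketch, not necessarily the exact code used.)

julia> using DelimitedFiles

julia> open("/tmp/t.csv", "w") do io
           println(io, "k,m,i,l,p,b,e,o,d,n,a,c")
           writedlm(io, round.(rand(100_000, 12), digits=7), ',')
       end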

julia> @benchmark CSV.read("/tmp/t.csv")
BenchmarkTools.Trial:
  memory estimate:  12.31 MiB
  allocs estimate:  196
  --------------
  minimum time:     68.950 ms (0.00% GC)
  median time:      69.678 ms (0.70% GC)
  mean time:        70.810 ms (2.27% GC)
  maximum time:     118.171 ms (40.81% GC)
  --------------
  samples:          71
  evals/sample:     1

shell> wc -l /tmp/t2.csv
100001 /tmp/t2.csv

shell> head /tmp/t2.csv
k,m,i,l,p,b,e,o,d,n,a,c
5,0.4039211,0.1651225,0.5667682,0.5894277,0.9943912,0.547174,0.8407955,0.4460115,0.3069141,0.6926464,0.8103599
3,0.7549936,0.6487547,0.8099625,0.1726365,0.8733207,0.6790217,0.4219462,0.3477867,0.8507886,0.8026299,0.2950769
6,0.439265,0.4611259,0.4271396,0.02038192,0.8205912,0.8553605,0.02757602,0.2683213,0.8179719,0.7291129,0.06494521
7,0.2351459,0.5336153,0.369024,0.1737951,0.779296,0.694431,0.5706579,0.3687971,0.7410107,0.9468887,0.9853139
0,0.06228102,0.4772179,0.4879081,0.5711414,0.7249674,0.7723914,0.3086208,0.9367701,0.3911373,0.7716293,0.8062436
2,0.2336157,0.03236917,0.375143,0.4423652,0.4979272,0.6796134,0.2883468,0.5686293,0.04900234,0.6169183,0.02231071
2,0.4393972,0.7299044,0.7271473,0.644167,0.1910563,0.9823838,0.01284925,0.7959115,0.5527272,0.08613271,0.1721825
3,0.01794849,0.8456904,0.9365643,0.3868644,0.4041997,0.8453569,0.136697,0.8901002,0.500644,0.1406727,0.3415088
9,0.6320285,0.6464563,0.8733505,0.04576213,0.6102426,0.3786439,0.2345142,0.2210209,0.5040782,0.5065185,0.2678539

julia> @benchmark CSV.read("/tmp/t2.csv")
BenchmarkTools.Trial:
  memory estimate:  47.32 MiB
  allocs estimate:  2294086
  --------------
  minimum time:     95.924 ms (1.56% GC)
  median time:      96.489 ms (1.58% GC)
  mean time:        98.969 ms (3.59% GC)
  maximum time:     154.921 ms (33.72% GC)
  --------------
  samples:          51
  evals/sample:     1

julia> @benchmark Pandas.read_csv("/tmp/t.csv")
BenchmarkTools.Trial:
  memory estimate:  11.36 KiB
  allocs estimate:  97
  --------------
  minimum time:     126.454 ms (0.00% GC)
  median time:      129.265 ms (0.00% GC)
  mean time:        129.198 ms (0.00% GC)
  maximum time:     131.443 ms (0.00% GC)
  --------------
  samples:          39
  evals/sample:     1

julia> @benchmark Pandas.read_csv("/tmp/t2.csv")
BenchmarkTools.Trial:
  memory estimate:  11.36 KiB
  allocs estimate:  97
  --------------
  minimum time:     120.393 ms (0.00% GC)
  median time:      123.140 ms (0.00% GC)
  mean time:        122.649 ms (0.00% GC)
  maximum time:     124.729 ms (0.00% GC)
  --------------
  samples:          41
  evals/sample:     1


julia> versioninfo(verbose=true)
Julia Version 1.2.0-DEV.unknown
Commit 60457de* (2019-03-08 17:26 UTC)
Platform Info:
  OS: Linux (x86_64-pc-linux-gnu)
  uname: Linux 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018 x86_64 unknown
  CPU: Intel(R) Xeon(R) CPU E3-1220 v5 @ 3.00GHz:
              speed         user         nice          sys         idle          irq
       #1  3500 MHz    3792225 s       9640 s      88120 s  373573962 s          0 s
       #2  3474 MHz    1353493 s      12106 s      65979 s  377578307 s          0 s
       #3  3500 MHz    1468507 s      13422 s      65814 s  377466150 s          0 s
       #4  3480 MHz    1371725 s      13179 s      68439 s  377578218 s          0 s

  Memory: 62.74964904785156 GB (27196.83203125 MB free)
  Uptime: 3.790843e6 sec
  Load Avg:  0.0  0.0322265625  0.0029296875
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-6.0.1 (ORCJIT, skylake)
Environment:
  HOME = /home/me
  TERM = screen-256color
  PATH = /home/me/.cargo/bin:/home/me/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl

I’m using a somewhat outdated MacBook Pro. Haven’t tried Linux yet. But if you compare with Pandas then you should see the difference at your end.

Here is how I generated the test data. Basically 10 floats, a 5-char string and a 10-char string surrounded by double quotes.

julia> using Random   # randstring is in the Random standard library

julia> n = 100000; open("/tmp/test.csv", "w") do f
         for i in 1:n
           println(f, join(vcat(rand(10), randstring(5), '"'*randstring(10)*'"'), ","))
         end
       end

Julia’s CSV seems to be nearly 3x slower at parsing floating point numbers with 17 vs 16 digits after the decimal point. I’d file a bug.

julia> n = 100000;
       open("/tmp/test.csv", "w") do f
           for i in 1:n
               println(f, join(vcat(round.(rand(10), digits=16)), ","))
           end
       end

julia> @benchmark CSV.read("/tmp/test.csv")
BenchmarkTools.Trial:
  memory estimate:  32.50 MiB
  allocs estimate:  1305362
  --------------
  minimum time:     126.785 ms (0.00% GC)
  median time:      152.950 ms (13.80% GC)
  mean time:        162.647 ms (15.15% GC)
  maximum time:     230.151 ms (36.65% GC)
  --------------
  samples:          31
  evals/sample:     1

julia> n = 100000;
       open("/tmp/test.csv", "w") do f
           for i in 1:n
               println(f, join(vcat(round.(rand(10), digits=17)), ","))
           end
       end

julia> @benchmark CSV.read("/tmp/test.csv")
BenchmarkTools.Trial:
  memory estimate:  109.93 MiB
  allocs estimate:  5998616
  --------------
  minimum time:     273.352 ms (0.00% GC)
  median time:      417.435 ms (22.99% GC)
  mean time:        441.242 ms (27.28% GC)
  maximum time:     767.849 ms (48.71% GC)
  --------------
  samples:          12
  evals/sample:     1

16 digits is Float64 accuracy. That sounds suspicious…
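
For context, neighbouring Float64 values near 1.0 already differ only in the 17th significant digit, so 17-digit inputs sit right at the edge of what a fast parsing path has to handle:

julia> nextfloat(1.0)   # adjacent Float64s near 1.0 differ in the 17th significant digit
1.0000000000000002

julia> eps(1.0)         # the spacing near 1.0 is about 2.2e-16
2.220446049250313e-16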

I have a pretty comprehensive CSV benchmark suite here. I last ran it in December and got these numbers.

I would expect that the many-digit Float64 issue in CSV.jl (which you can also see very clearly in my benchmark results: there is one file with uniform floats and one with short floats, where “short” refers to the number of digits) should be better with this PR that was just merged but hasn’t made it through METADATA yet.

I’m rerunning my benchmark right now with that PR in Parsers.jl to see how things stack up at this point. But running it takes a while; it will probably be another day or so until it is done.

2 Likes

Yep, huge improvement with Parsers#master!
Over a 3x speed improvement, and it gets rid of most of the allocations with 17 digits:

julia> n = 100000;
       open("/tmp/test.csv", "w") do f
           for i in 1:n
               println(f, join(vcat(round.(rand(10), digits=17)), ","))
           end
       end

julia> @benchmark CSV.read("/tmp/test.csv")
BenchmarkTools.Trial:
  memory estimate:  10.59 MiB
  allocs estimate:  175
  --------------
  minimum time:     135.110 ms (0.00% GC)
  median time:      135.647 ms (0.00% GC)
  mean time:        138.449 ms (1.98% GC)
  maximum time:     183.970 ms (25.83% GC)
  --------------
  samples:          37
  evals/sample:     1
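
(For anyone who wants to try this before a new Parsers.jl release is tagged, you can track master directly from the Pkg REPL:)

pkg> add Parsers#master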
3 Likes

Alright, I reran my benchmark suite, and the results are here. There are still some things about the benchmark results page that I want to improve before I announce it more widely, so please do take a peek and give feedback, but maybe don’t promote it around yet, because I wouldn’t want folks to cite these numbers before I’ve added a few more things.

In particular, these numbers are for the case where the CSV reader has already read a file with exactly the same schema (column count, names, and types) before the benchmark is run. That makes the Julia packages look artificially good, because all JIT compiling is excluded from these runs (in particular, JIT compiling that is schema-specific).
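
(You can see the effect in a rough way by timing the same read twice in a fresh session; the first call pays the schema-specific compilation cost, the second one doesn’t. Illustrative only, using the file from earlier in the thread:)

julia> @time CSV.read("/tmp/test.csv");   # cold: includes schema-specific JIT compilation

julia> @time CSV.read("/tmp/test.csv");   # warm: parsing time only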

I have the results for runs where that isn’t the case; I just need to figure out how to present them in this figure without making it entirely unwieldy. My current thinking is to add a drop-down box so that one can interactively select that. If anyone wants to help out, the way to do that would be to modify the vega-lite spec that powers this here. The easiest workflow for that is to click on the three dots in the top right of the figure on the benchmark page, select “Open in Vega editor”, and then edit the spec there until it does what it should do.

6 Likes

Am I right in thinking that Julia will never match the performance of other optimised CSV readers, because of the need for compilation? TextParse looks pretty fast without compilation, but compilation will always be needed, because every CSV file is different: different columns have different types and therefore require different code to be generated.

Basically, I am starting to form the view that reading CSV is one area where Julia can’t solve the two-language problem, if we care about performance, because of the need for compilation.

More than happy to be wrong on this!

TableReader.jl is comparable to or often faster than pandas on many real-world data sets I’ve benchmarked, including the start-up time required for JIT compiling. Here is a quick benchmarking result. The first file (parking-citations.csv) is a 1.2 GB CSV with 19 columns and mixed types including integers, floats, strings, datetimes, and missing values. The second file (GSM2230757_human1_umifm_counts.csv) is almost an integer matrix except for the first few string columns. This file has more columns than rows (1,938 rows x 20,128 columns), and thus other Julia packages are not good at handling it (see Loading data with a lot of columns takes a long time at the first call · Issue #388 · JuliaData/CSV.jl · GitHub). On both files, TableReader.jl is faster than pandas by an obvious margin. Of course, it is written in pure Julia and thus the two-language problem is clearly avoided :wink:

pandas:

~/w/TableReader (master|…) $ ipython
Python 3.7.2 (v3.7.2:9a3ffc0492, Dec 24 2018, 02:44:43)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.2.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import pandas

In [2]: %time df = pandas.read_csv("data/parking-citations.csv", low_memory=False)
CPU times: user 33.2 s, sys: 4.31 s, total: 37.5 s
Wall time: 37.5 s

In [3]: %time df = pandas.read_csv("data/GSM2230757_human1_umifm_counts.csv")
CPU times: user 19.2 s, sys: 863 ms, total: 20 s
Wall time: 20 s

TableReader.jl:

~/w/TableReader (master|…) $ julia
               _
   _       _ _(_)_     |  Documentation: https://docs.julialang.org
  (_)     | (_) (_)    |
   _ _   _| |_  __ _   |  Type "?" for help, "]?" for Pkg help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 1.1.0 (2019-01-21)
 _/ |\__'_|_|_|\__'_|  |  Official https://julialang.org/ release
|__/                   |

julia> using TableReader

julia> @time df = readcsv("data/parking-citations.csv", chunksize = 0);
 23.972916 seconds (59.34 M allocations: 5.699 GiB, 12.75% gc time)

julia> @time df = readcsv("data/GSM2230757_human1_umifm_counts.csv");
  4.353241 seconds (3.50 M allocations: 1.054 GiB, 55.04% gc time)
17 Likes

I also have a plan to get rid of the schema-specific compile stuff in TextParse.jl.

Given you have access to the data, can you try R’s data.table?

install.packages("data.table")
system.time(data.table::fread("path/to/file.csv"))

Here is a result:

~/w/TableReader (master|…) $ R

R version 3.5.2 (2018-12-20) -- "Eggshell Igloo"
Copyright (C) 2018 The R Foundation for Statistical Computing
Platform: x86_64-apple-darwin15.6.0 (64-bit)

...

> library(data.table)
data.table 1.12.0  Latest news: r-datatable.com
> system.time(data.table::fread("data/parking-citations.csv"))
|--------------------------------------------------|
|==================================================|
|--------------------------------------------------|
|==================================================|
   user  system elapsed
163.558   4.204  26.779
> system.time(data.table::fread("data/GSM2230757_human1_umifm_counts.csv"))
   user  system elapsed
  1.048   0.151   1.403
1 Like

Oh wow, looks like TableReader is very competitive. Hopefully you can fix the ‘|’ delimiter issue so I can test it on the Fannie Mae data.

Submitted a bug report. It doesn’t work on the Fannie Mae data yet.

@bicycle1885 how did you make TableReader so much faster than all the other Julia readers? Can you give some high-level strategies you followed?

1 Like

I’m going to officially announce TableReader.jl here with some design concepts once it is registered in METADATA (https://github.com/JuliaLang/METADATA.jl/pull/22346). But briefly speaking, I think horizontal tokenization using a finite state automaton and vertical parsing using a string cache, etc., make TableReader.jl faster than other packages. These strategies come from my experience making fast text parsers in the field of bioinformatics (for example, https://github.com/BioJulia/Automa.jl is my work). Also, modest specialization of methods reduces the start-up delay, especially when there are a lot of columns in the data file; https://github.com/bicycle1885/TranscodingStreams.jl makes this practical.
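
(To illustrate just the string-cache idea: a generic sketch, not TableReader’s actual implementation. Repeated cell values are interned so that identical strings share one object instead of allocating a fresh String per cell.)

# generic sketch of string interning for a text parser (hypothetical helper);
# a real implementation would hash the raw byte range first to avoid
# allocating a String for values that are already in the cache
function intern!(cache::Dict{String,String}, cell::AbstractString)
    s = String(cell)
    return get!(cache, s, s)   # reuse the cached copy if this value was seen before
end

cache = Dict{String,String}()
a = intern!(cache, "2019-03-08")
b = intern!(cache, "2019-03-08")
a === b   # true: both cells now reference the same String object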

11 Likes

Are there any strategies that can further improve performance when multithreading comes around? I learned that expertise really matters.

This feels weird. I am not criticizing the approach; clearly the approach is fantastic. But the fact that we need to make this compromise at all bothers me. Of course full specialization will result in faster code, but the compilation will slow it down! This problem seems unique to Julia.

Theoretically, if I build a process to read CSV files in the same format every month (with new data, of course), then I would go for the fully compiled approach. In fact, I would try to save the compiled code, especially since the code is dispatched on all the column types. But it feels like, for interactive use, more modest specialization is sometimes better. Balancing this trade-off in practice increases the mental load when developing packages. No solution from me, just some observations that I think are interesting.
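
(A rough sketch of that monthly-batch idea with today’s behaviour: pay the schema-specific compile cost once per session by warming up on a small sample file with the same columns, then the full-size file parses without the extra JIT overhead. File names here are hypothetical.)

using CSV

# warm-up read on a small file with the same columns/types as the real data;
# this triggers the schema-specific compilation for this Julia session
CSV.read("sample_same_schema.csv")

# the full monthly file now reuses the already-compiled parsing code
df = CSV.read("monthly_data_2019_03.csv")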