Choosing a numerical programming language for economic research: Julia, MATLAB, Python or R

Hi all,

We wrote a new blog post comparing Julia with MATLAB, Python, and R.
Comments and suggestions are welcome!

19 Likes

Thanks for the write-up.

I think it is useful to make a distinction between (re-)running an old, established computation/model and creating a new one. In the former case even R can be performant (because someone has written C code for it). In the latter case, Julia is likely to have a considerable advantage. This could have been clearer in the write-up.

As for your critique of the Julia documentation, you probably have a point in saying that something easier would be helpful to many end-users.

4 Likes

Agreed. The real advantage of R is that it has libraries for anything one could imagine, and these are fast, accurate, and trusted. Rcpp is super easy to work with when performance is needed.

Julia is much more performant, but for most applications that is irrelevant. What matters is ease of use and libraries, and that is where R beats Julia, and I think it will continue to.

2 Likes

Nice writeup. As usual there are many specific things one could debate, but overall it is a balanced and reasonable summary.

The one thing that did surprise me though was this:

Julia has dependency management facilities that work pretty well but are poorly documented and hard to use.

Could you explain what you are referring to here?

2 Likes

Thank you for taking the time to read our post and for sharing your valuable advice.

Thank you for your comment. Dependency management refers to how one can run specific versions of Julia (like 1.7.2) with specific versions of libraries (like 2.3.8) known to generate particular output. Julia has this functionality (what Python calls virtual environments), but I cannot make sense of the documentation.
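
For concreteness, what we were after is roughly the following workflow (the project name, package name, and version below are made up for illustration):

julia> ]                                  # press ] to enter the Pkg REPL
(@v1.7) pkg> activate .                   # create or switch to a project-local environment
(MyProject) pkg> add Distributions@0.25   # install a specific version; recorded in Manifest.toml
(MyProject) pkg> status                   # show exactly which versions the environment uses

As far as I understand it, committing the generated Project.toml and Manifest.toml alongside the code then lets someone else reproduce the same library versions later with pkg> instantiate.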

3 Likes

The writeup is neat and formatted well. Some of the criticisms of the languages are quite vague, but it’s generally in line with other language comparisons I’ve seen. This does seem intended to be concise and digestible rather than detailed.

You provided an appendix link with the code and performance comparisons, but benchmark reports really should be more detailed. I could not even guess the test inputs’ types if it weren’t for the statically typed C code; there is no report of absolute timings; and since different languages have their own best practices for repeatable benchmarking (which may report more than just timing, such as allocations), comparing benchmarking methods across languages needs explicit justification.

3 Likes

I agree with this: the Pkg docs are extremely dense and hard to follow. Beyond the basic `] activate` and `] add` workflow, the docs are not really very helpful (especially with regard to `] test` environments).
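
For the record, the test-environment workflow I eventually pieced together looks roughly like this (MyPackage is a placeholder name, and I may well be missing the officially intended way):

# test-only dependencies live in test/Project.toml
(@v1.7) pkg> activate ./test
(test) pkg> add Test DataFrames       # added here, not to the package's own Project.toml

# back in the package environment, pkg> test picks up test/Project.toml automatically
(@v1.7) pkg> activate .
(MyPackage) pkg> test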

7 Likes

Thanks for the kind words on the writeup. We were indeed aiming for a general audience, not specialists. There are plenty of comparisons for technical audiences, but not so many for typical users.
I don’t understand your criticism of the appendix. It is the calculation of a particular mathematical function where everything is floating point. What is the problem?
We could have given absolute timings, but I think most people care about relative performance. Besides, absolute timings are specific to the computer, so they are even less relevant. We ran the code multiple times, using each language’s benchmarking tools where they exist. What is wrong with that?
I think this is how most people benchmark in real life.

Could you elaborate on the point about documentation? I agree with you to some degree, but would like to know if we’re thinking of the same things.

Julia’s documentation … by and large would profit from focusing on practical uses of code instead of computer science arcana.

This is about the Julia manual, right? I don’t see this so much in the docstrings (though they have their own issues), but the manual does have a habit of glossing over the 90% use case quickly and then spending a lot of space on edge cases and underlying theoretical concerns. It would benefit from putting the typical use cases first, with several examples, and reserving the advanced use cases and the nuanced minutiae of each section for their own separate subsections.

I haven’t looked at the provided source code from the zip, but I think they’re referring to the precise incantations used - there are some subtle problems that can be introduced by faulty benchmarking practices after all.

Yes, the Julia manual. It focuses almost exclusively on theory, leaving users who are not theory experts in the dark. I agree it would be much better to put use cases at the start of the documentation, and I suspect it would do wonders for Julia’s popularity too.

I was a long-time Matlab user. In 2018 I decided to migrate to a new language, so I looked at documents like the one mentioned above to make a balanced choice. After experimenting with Python and Julia (I knew a little bit of R), I eventually chose to stick with Julia. Publications of this kind are crucial for users like me because they help us form expectations about the future of alternative languages. We expect them to be unbiased, based on serious scrutiny of the various aspects that matter when choosing a computing language, and not subject to random factors that may affect the production of such documents.

I have been aware of the two previous versions of Jon Danielsson and associates’ evaluations of “Julia, Matlab, Python or R”, from 2018 and 2020. I must confess that they weighed significantly in the choices I made. Four years have now passed since the release of Julia 1.0, and without any further dramatic change in any of the four languages (as far as I can see), I can observe a common fact across all three versions of the published document: the main conclusion seems to depend heavily on Danielsson’s co-author. For example, in the 2020 version, Danielsson and Aguirre conclude:

“In conclusion, Julia is generally the fastest and requires the least amount of tricky coding to run fast. Any of the others could be the second best, depending on the application and the skill of the programmer […] As a consequence, Julia is the language we now tend to pick for new projects and generally recommend.”

But in the 2022 version, Danielsson and Lin conclude that:

“R is the best overall language [and Julia] is by far the fastest of the four. Its weakness is library support and documentation. We recommend Julia for those writing their own code to solve complex, time-consuming problems.”

What has changed since 2020 so that R has become the “best overall language” and Julia’s “library support and documentation” have become so problematic?

11 Likes

I asked Jon what changed, and he told me that he and some colleagues had started a large project with Julia and ran into difficulties with both the lack of libraries and the documentation. In particular, he said that a lot of Julia’s data-pipeline libraries are more immature than he had anticipated, so he has become a little less enthusiastic about Julia.

3 Likes

Great writing. On this point I agree:

Not only is it very expensive, it is slow and has the worst library support. 

Julia is great. I know from this article that R has the best backward compatibility. Perhaps R+Julia together can perform better.

I have a project doing that. It is all Julia, except for embedded R for the regressions (there is no Julia analog to R’s dynlm(), I think). It works really well, but I am (irrationally) bothered by the same file switching from Julia to R and back.
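
Roughly, the pattern looks like this (stripped down, with made-up data; it assumes the dynlm package is installed on the R side):

using RCall, DataFrames

# made-up series, purely for illustration
df = DataFrame(y = cumsum(randn(200)), x = randn(200))

@rput df      # copy the Julia DataFrame into the embedded R session as `df`

R"""
library(dynlm)
fit <- dynlm(y ~ L(y, 1) + d(x), data = ts(df))   # lagged y and differenced x, dynlm-style
summary(fit)
"""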

1 Like

If you want to share the project here, I would love to read it and learn from it.

From a quick look at the package, it seems it’s just a wrapper around R’s built-in lm() that offers some additional formula syntax like lag and diff. That should be doable by extending the StatsModels @formula; see Dave Kleinschmidt’s talk at this year’s JuliaCon.
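
In the meantime, a crude pure-Julia workaround (not an @formula extension, just manually built lag/diff columns, with made-up data) would be something like:

using DataFrames, GLM, StatsModels, ShiftedArrays

df = DataFrame(y = cumsum(randn(100)), x = randn(100))   # made-up series

df.y_lag1 = ShiftedArrays.lag(df.y, 1)    # what dynlm writes as L(y, 1)
df.dx     = [missing; diff(df.x)]         # what dynlm writes as d(x)

fit = lm(@formula(y ~ y_lag1 + dx), dropmissing(df))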

1 Like

Just a wrapper doesn’t quite do it justice, even though it is just a wrapper! It is quite well thought out and does things in a way that is natural to someone who knows regressions, while lm() is much lower level and needs a lot of coding that dynlm() takes care of. Stated differently, one can write regressions in dynlm() much as one sees them in math. The developers of dynlm() did an excellent job.

1 Like

I meant that the appendix should explicitly describe the benchmarking code and report raw results, in addition to the concluding relative timings. I had actually overlooked that you provided a download link for your benchmarking code; now that I have taken a non-exhaustive glance, there are a few things I can comment on:

  • Your garch.py includes an np.var call in between the tic and toc timings made by timeit.default_timer, but your garch_numba.py does the np.var call before the benchmark loop. You need to keep all the little things consistent too; I also noticed that the likelihood functions may have lik = 0 or lik = 0.0 depending on the language.

  • Your garch.jl and garch_inbounds.jl both benchmark a function call with global variables, which generally hampers performance. If you dig further into the @timed results, you’ll notice an unnecessary allocation of 16 bytes, an artifact of the global variables. In practical situations the inputs will be local variables, and benchmarking such a call shows the expected 0 allocations and probably improved timing.

Example of the difference between global and local variables when benchmarking your `likelihood` method (I made up some inputs for simplicity):
julia> o = 1.2; a = 3.4; b = 0.25; h = 1.3; y2 = rand(1000); N = length(y2)
1000

julia> @timed likelihood(o, a, b, h, y2, N)
(value = 1464.433023543925, time = 1.6412e-5, bytes = 16, gctime = 0.0, gcstats = Base.GC_Diff(16, 0, 0, 1, 0, 0, 0, 0, 0))

julia> let
       o = 1.2; a = 3.4; b = 0.25; h = 1.3; y2 = rand(1000); N = length(y2)
       @timed likelihood(o, a, b, h, y2, N)
       end
(value = 1473.07078801908, time = 1.0601e-5, bytes = 0, gctime = 0.0, gcstats = Base.GC_Diff(0, 0, 0, 0, 0, 0, 0, 0, 0))
  • Consider using BenchmarkTools.jl to profile Julia instead of looping @timed, like how you use the microbenchmark package for R. There is some reading to do, some mistakes to avoid, and the interpolation syntax will look odd at first, but the design and features definitely beat implementing your own benchmarking code (see the sketch below).
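
A minimal sketch, reusing the made-up inputs from above and assuming the likelihood function from garch.jl is already loaded:

using BenchmarkTools

o = 1.2; a = 3.4; b = 0.25; h = 1.3; y2 = rand(1000); N = length(y2)

# interpolating with $ makes the benchmark measure the call itself,
# not the cost of looking up untyped global variables
@benchmark likelihood($o, $a, $b, $h, $y2, $N)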
2 Likes