What package(s) are state-of-the-art OR attract you to Julia, and make you stay there (not easily replicable in e.g. Python, R, MATLAB)?

You tend to see Julia described as “immature”, but “developing very fast”.

I believe “immature” is simply not correct when applied to the language, nor, it seems, to the package ecosystem.

I think Julia may just need more publicity about what is possible. With JuliaPro and its “over 100” packages it seems that should be enough, except for that “one package” that is always missing…

Out of curiosity, approximately how many packages do you use on a regular basis (excluding your own not-yet-published ones), and what are the major ones? I’m especially curious about those not replicated elsewhere.

People using e.g. Python can claim 10,000+ packages, but who really has an overview of that ecosystem (besides, you can reuse all of them from Julia)?

My shortlist of candidates: JuMP was the first state-of-the-art package, then

https://github.com/JuliaDiffEq/DifferentialEquations.jl

JuliaDB may be another, or is on its way there, and so may the data/missing-value packages, though I can’t quite put my finger on whether it’s one or a few packages or the ecosystem as a whole.

More? Flux.jl? Some of the ML packages I hear are really good.

We also have some really obscure packages, e.g. for file formats (I’m not sure how much is actually missing by now). Picking something at random:

https://github.com/matteoacrossi/ExpmV.jl

I’m not thinking of packages that are clones of MATLAB ones, unless, say, they are missing in Python.

As we can claim to be able to use all Python (or R) packages, Python (and R) users can claim the converse with pyjulia. Is there a good counterargument to that? Why do you still choose Julia over the others? Is it more about the language itself as the better default (not just for speed), or about the packages, and would they for some reason be inferior to use through pyjulia?

7 Likes

I think JuMP and the DiffEq system are both state-of-the-art, I just wish I could use them more often (or, like, at all) in my work!

A lot of the stuff that I use in other languages is still making its way into Julia, but the potential capabilities of the packages once they are in Julia are much greater. For example, Dynare is a pretty developed package for MATLAB, and it has some ports to Julia, but they are still mostly alpha quality. I think Julia is still a new and growing ecosystem, so the potential for massive improvements is there in just about every corner.

1 Like

Definitely JuMP. The Python equivalents are a joke.

I think some of the simple stuff is really underrated. I can’t express to you how strongly I prefer Julia DataFrames over pandas. They are so lightweight and simple, and it’s so easy to work on them using just functions from Base. As I’ve said elsewhere, for the most part the only really specialized functions I use for DataFrames are by and join, and that’s really all you should ever need from a dataframe implementation, because there is no reason why all the generic language facilities (e.g. Base in Julia) shouldn’t be completely adequate for the vast majority of what you’d want to do. DataFrames are pretty simple things; there is no reason for there to be reams and reams of documentation about them. (To be clear, the problem is not with pandas, it’s a fine package; the problem is with Python.)
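To make that concrete, here’s a minimal sketch with toy data (spelled in the current DataFrames API, where the by and join I mentioned are now combine(groupby(...)) and innerjoin):

```julia
using DataFrames, Statistics

df  = DataFrame(id = [1, 2, 1, 2], x = [0.1, 0.4, 0.3, 0.2])
ref = DataFrame(id = [1, 2], label = ["a", "b"])

mean(df.x)                               # plain Base/Statistics functions work on columns
df[df.x .> 0.2, :]                       # ordinary logical indexing, no query mini-language

combine(groupby(df, :id), :x => mean)    # what I called `by` above
innerjoin(df, ref, on = :id)             # what I called `join` above
```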

I haven’t even gotten to the real reason to always use Julia. (In the end, it’s not about the packages, it’s about the language.)

4 Likes

Using Python as glue code and through pyjulia is fine. If all of your time is spent in package calls, then there’s no speed difference. The issue is developing packages in Python. It’s not just a code-acceleration thing; the whole package development ecosystem is kind of wonky, with tox environments etc. diffeqpy had the help of some experienced Python package devs and still took about a week to get tests running, so that’s what it’s like. Julia (and R), on the other hand, have very easy integrations with CI and code coverage libraries, along with people actually double-checking package submissions to make sure they meet standards and helping you fix issues (no one checks PyPI; anything goes there).

But there’s no reason not to use Python to use other people’s packages, since then your code will be just as fast as the package code if all your time is spent in package calls. With one big exception: it will run into performance issues with any package that requires function inputs, like differential equation, optimization, etc. libraries. In a recent blog post we showed that Numba+SciPy integrators are still 10x slower than they should be, so it’s not the easy solution some people make it out to be. The reason, of course, is that Numba doesn’t do interprocedural optimizations and cannot optimize through Python code, so if there’s any Python in the entire stack, then the whole thing incurs context-switching costs; even if you have a faster function call, you still have a limiting factor. If you have an asymptotically costly function then this can be mitigated, but that puts a lot of limitations on standard use cases.

This is why some Python optimization and differential equation libraries actually take strings or SymPy expressions as input (PyDSTool is one of many examples): direct Numba/Cython usage for function input has these problems, so they have to write a compiler behind the scenes, which then of course means you have the restriction that everything has to be Float64 without arbitrary number types, etc. etc. You can see how this not only lowers the feature set but also adds to development complexity. In fact, this is clearly seen in how PyTorch has a full JIT compiler inside of it, along with a full math library! So developing PyTorch was more like developing Julia and the ML library at the same time, instead of just building an ML library. And then it doesn’t get the automatic composability features of Flux.jl, so you have to build everything in there yourself… you can see how the package development differences are quite astronomical.

Back to Packages

So okay, that rant was done because it’s informative. That’s where our competitive difference is, and it’s reflected in the state-of-the-artness of packages. A lot of data science, statistics, and ML libraries just take a matrix of data in and spit out predictions. In that sense, all of the time is spent in package code, and while Julia does have a competitive advantage for building such packages, it doesn’t have an advantage for using such packages. This is why data science and ML have done just fine in Python, and statistics has done just fine in R. It would be hard to find real Julia gems in these areas except in things which are still brand new due to someone’s recent research. I’d point to Josh Day’s OnlineStats.jl as an example of that.

One exception is with Bayesian estimation and MCMC. For this area of data science you do need to take in full models. R and Python have resorted to taking Stan models in the form of a string. This is ugly! You’re writing in another language (Stan), in a string (without syntax highlighting), in order to do Bayesian estimation? Julia has libraries that I have found to be amazing, like Turing.jl and DynamicHMC.jl. These libraries let you write your likelihood function as just standard Julia code, making it much easier to build complicated models. For example, Stan only lets you use its two differential equation solvers (with like 5 options :smile:) and only for ODEs, but Turing.jl and DynamicHMC.jl can both internally utilize DifferentialEquations.jl code for ODEs, SDEs, DAEs, DDEs, etc.
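For concreteness, a toy sketch of what “the likelihood is just Julia code” looks like with Turing’s @model macro (the model and data are made up, and the macro syntax has shifted a bit across Turing versions):

```julia
using Turing

# Toy model: unknown coin bias, the likelihood written as plain Julia code
@model function coinflip(y)
    p ~ Beta(1, 1)                     # prior
    for i in eachindex(y)
        y[i] ~ Bernoulli(p)            # likelihood, just a loop
    end
end

data = [1, 0, 1, 1, 0, 1]
chain = sample(coinflip(data), NUTS(), 1_000)   # Hamiltonian Monte Carlo on the model
```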

And neural net libraries are not as black-box as most of ML. To get around the issues I mentioned earlier, you cannot just pass functions in, so you have to tell the neural net library how to build the entire neural net. This is why PyTorch and TensorFlow are essentially languages embedded into Python which produce objects representing the functions you want to write, and which then compile the right code from those objects. Obviously this would be hard to write on your own, but the main disadvantage to users is that normal user code doesn’t work inside of these packages (you cannot just call arbitrary SciPy code, for example). But Flux.jl has full composability with Julia code, so it’s definitely state-of-the-art and has the flexibility that other neural net libraries are trying to build towards (TensorFlow is rebuilding on Swift in order to get this feature of Flux.jl…).
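A minimal sketch of what that composability means in practice (the layer sizes and the squash step are invented for illustration, and the constructor spelling follows recent Flux releases):

```julia
using Flux

squash(x) = clamp.(x, 0f0, 1f0)        # an ordinary Julia function, no framework API

model = Chain(
    Dense(10 => 32, relu),             # older Flux spells this Dense(10, 32, relu)
    squash,                            # arbitrary user code sits right inside the model
    Dense(32 => 2),
)

x = rand(Float32, 10)
model(x)                               # runs; gradients flow through squash automatically
```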

But where things tend to be wide open is scientific computing. This domain has to deal with functions and abstract types. So I’d point not just to DifferentialEquations.jl and JuMP, but also to a lot of the tooling around that area. IterativeSolvers.jl is a great example of something heavily unique. At face value, IterativeSolvers.jl looks a lot like PySparse’s iterative solvers module or the corresponding part of SciPy. Except it’s not. Those both require a NumPy matrix; they specifically require a dense/sparse matrix in each documented function. However, one of the big uses of these algorithms is not on sparse matrices but on matrix-free representations of A*x, and you cannot do this with those Python packages because they require a real matrix. In IterativeSolvers you just pass a type which has A_mul_B! (*) overloaded to be the function call you want, and you have a matrix-free iterative solver. Complex numbers, arbitrary precision, etc. are all supported here, which is important for ill-conditioned and quantum uses. Using abstract types you can also presumably throw GPUArrays in there and have it just run. This library only keeps getting better.
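Here’s a hedged sketch of that matrix-free pattern (the operator is a made-up toy, and on current Julia the overload is LinearAlgebra.mul! rather than the old A_mul_B!):

```julia
using IterativeSolvers, LinearAlgebra

# Toy matrix-free operator representing A = 3I, without ever storing a matrix
struct ScaledIdentity
    n::Int
end
Base.size(A::ScaledIdentity) = (A.n, A.n)
Base.size(A::ScaledIdentity, i::Integer) = A.n
Base.eltype(::ScaledIdentity) = Float64
LinearAlgebra.mul!(y, A::ScaledIdentity, x) = (y .= 3 .* x)   # the only "A*x" cg ever needs
Base.:*(A::ScaledIdentity, x) = 3 .* x

b = rand(100)
x = cg(ScaledIdentity(100), b)   # conjugate gradient, no explicit matrix anywhere
```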

One of the best scientific computing packages is BandedMatrices.jl. It lets you directly define banded matrices and then overloads things like * and \ to use the right parts of BLAS/LAPACK. Compared to just using a sparse matrix (the standard MATLAB/Python way), this is SCREAMING FAST (the QR factorization difference is particularly big). Banded matrices show up everywhere in scientific computing (pretty much every PDE has a banded matrix operator). If it’s not a banded matrix, then you probably have a block-banded matrix, and thank god that @dlfivefifty also wrote BlockBandedMatrices.jl, which uses the same speed tricks but expanded to block-banded matrices (so, discretizations of systems of PDEs).
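A small sketch of the usage (a toy tridiagonal stencil; the constructor spelling follows recent BandedMatrices releases):

```julia
using BandedMatrices

n = 10_000
A = BandedMatrix{Float64}(undef, (n, n), (1, 1))   # one sub- and one super-diagonal
A[band(0)]  .= -2.0
A[band(1)]  .=  1.0
A[band(-1)] .=  1.0

b = rand(n)
x = A \ b   # dispatches to banded factorizations instead of dense or generic sparse solves
```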

In fact, I would say that @dlfivefifty is an unsung hero in the backend of Julia’s scientific computing ecosystem. ApproxFun.jl is another example of something truly unique. While MATLAB has Chebfun, ApproxFun is similar but creates lazy matrix-free operators (so you can send these linear operators directly to things like, you guessed it, IterativeSolvers.jl). It uses InfiniteMatrix (InfiniteArrays.jl) types to grow and perform whatever size of calculation you need without fully reconstructing matrices.
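A quick sketch of the flavor of ApproxFun (toy function, following the examples in its README):

```julia
using ApproxFun

f = Fun(x -> exp(x) * sin(50x), 0..10)   # adaptively resolved to machine precision
f(3.2)       # evaluate anywhere on the domain
f'           # derivative, still a Fun
sum(f)       # definite integral over [0, 10]
roots(f)     # all roots in the domain
```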

While those may look like niche topics if you don’t know about them, these are the types of libraries people need in order to build packages requiring fast linear algebra, which is essentially anything in data science or scientific computing. So Julia’s competitive advantage in package building is compounded by the fact that there are now great, unique, fundamental numerical tools!

Along the lines of these “fundamental tools” packages is Distributions.jl. It’s quite a boring package: it’s just a bunch of distributions with overloads, but I think Josh Day’s discussion of how you can write code that is independent of the distribution is truly noteworthy.

If you want to write code to compute quantiles in R or Python, you’d need to use a different function for each possible distribution, or take in a function that computes the pdf (which of course then gives you the function-input problem in these languages!). Package developers have thus far just hard-coded the distributions they need into things like SciPy, but it’s clear how poorly that scales.
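For example, a sketch of the distribution-independent style in Julia (the middle_90 helper is just something I made up for illustration):

```julia
using Distributions

# One generic function; any distribution implementing the common interface just works
middle_90(d::UnivariateDistribution) = (quantile(d, 0.05), quantile(d, 0.95))

middle_90(Normal(0, 1))
middle_90(Gamma(2, 3))
middle_90(TDist(4))      # same code, no per-distribution special casing
```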

And then I’d end with JuMP, DifferentialEquations.jl, and LightGraphs.jl as very user-facing scientific computing packages which are in very important domains but don’t really have a comparable library in another scripting language. JuMP’s advantage is that it allows you to control many things like branch-and-cut algorithms directly from JuMP, something which isn’t possible with “alternatives” like Pyomo, so if you’re deep into a difficult problem it’s really the only choice. DifferentialEquations.jl is one of the few libraries with high-order symplectic integrators, one of the few libraries with integrators for stiff delay differential equations, one of the two high-order adaptive stochastic differential equation libraries (the other being a re-implementation of my first paper on it), one of the only libraries with exponential integrators, one of the only libraries with … I can keep going, and of course it does well in first-order ODE benchmarks. So if you’re serious about solving differential-equation-based models, then in many cases it’s hard to even find an appropriate alternative. And LightGraphs is on a truly different level performance-wise than what came before, and it only keeps getting better. Using things like MetaGraphs lets you throw metadata in there as well. Its abstract structure lets you write and use graph algorithms independent of the graph implementation, and lets you swap implementations depending on what would be efficient for your application.
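As a taste of the DifferentialEquations.jl interface referred to above, here’s the canonical toy problem (a minimal sketch, nothing specific to the fancier solvers mentioned):

```julia
using DifferentialEquations

# Exponential growth: du/dt = 1.01u, u(0) = 0.5, on t in [0, 1]
f(u, p, t) = 1.01 * u
prob = ODEProblem(f, 0.5, (0.0, 1.0))
sol  = solve(prob, Tsit5())    # swap Tsit5() for stiff, symplectic, SDE, DDE solvers as needed

sol(0.5)    # dense-output interpolation at any time point
```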

So if you’re serious about technical computing, these are three libraries to take seriously.

These packages have allowed for many domain specific tools to be built. Physics is one domain which is doing really well in Julia, with DynamicalSystems.jl and QuantumOptics.jl being real top notch packages (I don’t know of a DynamicalSystems.jl comparison in any other language, and I don’t know anything that does all of the quantum stochasticity etc. of QO).

There’s still more to come, of course. Compilation tools like PackageCompiler.jl and targeting asm.js are only in their infancy. JuliaDB is shaping up to be a more flexible version of dask, and it’s almost there. We’re missing good parallel linear algebra libraries like a native PETSc, but of course that’s the kind of thing that can never be written in Python (Julia is competing against C++/Fortran/Chapel here).

But I hope this is more of a “lead a :horse: to water” kind of post. There are a lot of unique things in Julia, and I hope this explains why certain domains have been able to pull much further ahead than others. Most of the technical computing population is still doing data science on Float64 arrays with black-boxed algorithms (TSne, XGBoost, GLM, etc.), and that drives a lot of the “immature” talk, because what they see in Julia is similar to, or just a recreation of, what exists elsewhere. But in the more numerical analysis and scientific computing domains, where higher precision matters (“condition number” is common knowledge), functions are inputs, and there are tons of unique algorithms in the literature that still have no implementation, I would even say that Julia surpassed MATLAB/Python/R a while ago, since it’s hard to find comparable packages to ours in these other languages.

If we want to capture that last flag, we really need to figure out what kind of competitive advantage Julia has in the area of data science / ML package development, because until something is unique it’ll just be “recreations”, which don’t draw the biggest user audience. While “programmers” will like the purity of one language and the ability to develop their own code, I think the completeness of statistical R or ML Python matters more to this group. I think a focus on big data via streaming algorithms + out-of-core is a good strategy, but it’s not something that couldn’t also be done in C++ (dask), so something more is required here.

76 Likes

Awesome post, thanks for this! I’d like to humbly request that you consider posting this on your blog; I’d like to use it to proselytize people, and it would just be slightly nicer and ultimately more visible if it were on your blog instead of only here :slightly_smiling_face:.

As I’ve indicated, the simplicity of the Julia packages was more than enough to persuade me even in those areas alone, but I think much of this I owe to my physics background. I don’t want anybody to tell me not to write a loop, I don’t care what the application is! Admittedly it’s been much harder to persuade people who are less comfortable with the idea of “numerical computing” per se.

9 Likes

For most people (casual users), it is about the packages, however, I see your “real reason” as one of the many in Julia - it’s kind of like “The Field”, Jeff et al. built it, and we have come :wink: making packages right and left.

3 Likes

Just a personal opinion: I see these “religious” commentaries as not very helpful for spreading Julia.

2 Likes

How is this that? I’m referring to an issue that I found obnoxious when using Python. The kind of C++ code I was writing before that typically did not involve specialized API calls for everything I ever needed to do. It’s purely a practical consideration. Granted, it’s not something that’s vital, it’s just an annoyance which is rectified in Julia.

I deleted my bolds, how’s that? Better tone?

You are saying what another person will ever need. It is some kind of (false) prophecy.

One could well need mature CSV import, and pandas will serve her better (for now).

Edit: If you change the word “you” (should ever need) to the word “me”, I think it would be really fine :slight_smile:

Sorry, I probably should not have been quite so literal in my wording; however, my understanding of the topic is basically this. Perhaps I’m missing some major aspect of these things?

That’s really a separate thing, but the CSV support is pretty good in Julia right now.

(By the way I was of course making a joke when I said “proselytize people” in my response to @ChrisRackauckas, which I had thought was obvious.)

Bio.jl is also very nice (nicer than R’s equivalents IMO, which are the standard in the field):

https://github.com/BioJulia/Bio.jl

4 Likes

That’s a great example too, since it uses bit operations on its own sequence types to make them fast and take up less memory. This is very different from string operations or char arrays, since these are lower-bit representations with specialized operations, all built in the comfort of Julia. Here’s a snippet, and you can find more:

https://biojulia.net/Bio.jl/latest/man/seq/sequences/bioseq/#Using-a-more-compact-sequence-representation-1

4 Likes

Although I am Julia-“positive” too, it is still more promising than truly useful for my work.

But I really like Keno’s Cxx.jl. A C++ REPL is really admirable! :smiley:

1 Like

I am not sure if solving the two-language problem by creating a new programming language like Julia is a better choice than using a glue language like Python or Lua. I guess over time, all languages have to glue to some other languages, because there will be more and more domain-specific languages that are good at certain areas. Even old languages like C++ are getting better.

1 Like

Mine would be the set of automatic differentiation packages: ForwardDiff and ReverseDiff.
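A tiny sketch of what these give you (the function f here is arbitrary, made up for illustration):

```julia
using ForwardDiff

f(x) = sum(abs2, x) / 2 + prod(x)     # any plain Julia function

x = rand(4)
ForwardDiff.gradient(f, x)            # exact gradient via dual numbers, no hand-written derivatives
ForwardDiff.hessian(f, x)
```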

I’m quite positive it is better, and the reason is all about programmer productivity.
I feel that writing code in Julia is much more productive than using languages like C, C++, or Go, since I can get the same results with similar performance in much less code. Python and Lua are also very productive languages, but if there aren’t libraries already written that do what you want, and you hit a performance wall trying to implement it in Python, then you end up losing all of the productivity advantages you had over C/C++/Go, because you have to write the fast version of the code in C/C++ (trust me, not something very fun to do).

Also, no matter what some of the old languages do to make themselves better, they’ll still be stuck having to support tons of legacy code, making it very hard to really innovate the way Julia has done, starting with a clean slate.
(I was a part of the design/architecture of revamping MUMPS => Caché ObjectScript, which was similar in scope to the changes from Dartmouth BASIC and the original Microsoft Basic => Visual Basic).

4 Likes

In my field (biophysical modelling), my current prediction is that Unitful.jl is the killer package.

https://github.com/ajkeller34/Unitful.jl

That’s what makes people want to rewrite their Fortran. Everything has units, and without unit tracking you need constant manual conversions throughout your code and interface wrappers between packages. And people mess it up way more often than you would hope; no one writes tests either. Most models avoid modularity because of the complexity (as well as a lot of other bad reasons). With Unitful they won’t have to, and it takes care of a lot of the potential errors. It’s a total game changer. It’s been done before, and in Fortran too, but Unitful makes it easy and potentially pervasive, although with a few bumps still to smooth out.
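A small sketch of what that looks like in practice (quantities and units chosen arbitrarily):

```julia
using Unitful

distance = 12.0u"km"
duration = 1.5u"hr"

speed = distance / duration     # 8.0 km hr^-1, units carried through automatically
uconvert(u"m/s", speed)         # ≈ 2.22 m s^-1

# distance + duration           # would throw a DimensionError right at the mistake
```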

After that I personally like DifferentialEquations.jl, but actually most models I read just cut and paste some 10-year-old Runge-Kutta Fortran routine right into the codebase, so I’m not sure they care that much in reality. The speed gains might be enticing, although they would happily cut and paste those too.

10 Likes

The work I’m doing just isn’t possible in my time-frames using C++/Fortran and a glue language. I can restructure 10,000-line Fortran codebases in Julia (down to 2,000 lines) and connect them to other models and outputs far quicker than I could connect and clean up the original Fortran models and the glue.

They end up with units, way better solvers, unbelievably better FileIO and data manipulation (like 5 lines vs 1,000), better Python/R interop, and they are actually extensible. And they still run as fast as the Fortran did anyway.

Also, in my field module interfaces can’t go through a glue language, because they need to connect inside hot inner loops. All over the place. And while it’s possible to orchestrate that in R or Python for C/Fortran libraries, it’s not painless, so no one ever does it. They just build huge monoliths, maybe with an R wrapper so it’s on CRAN.

6 Likes

@ChrisRackauckas, Great Writing!

I really like this thread as it really shows Julia’s prowess.
I would like to mention Knet.jl:

https://github.com/denizyuret/Knet.jl

A really well-structured, lower-level deep learning project (which is not mentioned enough in the community).
I wish Flux.jl and Knet.jl were merged under one GitHub organization (JuliaDeepLearning maybe?) with two projects, just like JuliaOpt is the home of many optimization packages.
They could share a lot (especially developer resources) yet still target different audiences (Knet.jl for the lower-level inner workings of the net and Flux.jl for faster prototyping).

Another package worth mentioning is Convex.jl:

https://github.com/JuliaOpt/Convex.jl

Some users in the Slack group have asked whether it is still under development (@madeleineudell, is it?).
As a heavy CVX user in MATLAB, I really like this one.
Yet I wish Variables had more structure options, as in CVX’s Variable Structures, and that more Sets were defined, as in CVX’s Sets.
By the way, both are missing the set of circulant matrices, whose projection is pretty easy: Projection onto the Set of Circulant Matrices.

1 Like

This should be published on its own!

I think talking about packages is the best advertisement for Julia — but it’s invisible to someone taking a first look at the moment. At first look, Julia is a theoretically beautiful programming language. More than that, it’s a practically beautiful ecosystem, but it takes weeks of experience to realise that on your own.

It takes a new user a lot of time or searching to come across many of the great packages — and it’s not easy to find the advantages of a package written up, so it often needs noticing on your own, which means you have to start using it extensively first.

AND THEN all the packages combine and are greater than the sum of their parts. It takes a lot more playing to find this out for yourself, so the people who find this out tend to be those who are already invested.

An article like this should be on the blog or … home page? … somewhere with really high visibility!

6 Likes