Discussion on "Why I no longer recommend Julia" by Yuri Vishnevsky

I don’t think there’s any problem with the flourishing of similar packages. The problem is that, apart from Zygote and ForwardDiff, none of them are flourishing, because none of them are getting enough attention.

And just as importantly, we can only learn from experimenting with new things if we get feedback on all these new packages from users. I picked the three of the above autodiff packages with the closest-to-median number of stars, setting aside Enzyme (which is still in development). They had a combined total of exactly 9 issues within the past year. When your number of issues is in the single digits, I’m guessing you’re not getting enough feedback to learn much from users.


Then I guess, as a common user, you can safely ignore them if you think they are not well maintained. As an experienced developer, however, you can always learn and find something inspiring in yet another XXX package.

I think that’s sort of my point. Python might have as many tiny packages if I were to spend a lot of time looking for them, but that’s fine. Python has a lot of users, so it can afford to be divided, especially since the big packages that everyone uses still get a lot of attention. Another way of thinking of this is that if stars are roughly proportional to attention, something like JAX is getting 17 times more attention and maintenance than Zygote. If we go by contributors, JAX is getting about 4 times as much. In the past month, JAX has merged 172 pull requests, while Zygote has merged 6, and Enzyme has merged 45.


What he basically says is that common workflows are polished in popular languages, because so many eyes have gone over them and fixed the bugs for many use cases. And indeed, such bugs are actively being addressed in Julia.

But try a more specialized use case, and you’ll see it’s very complicated or even impossible to do in Python or C++, because they aren’t flexible enough.

In C++, you can access not just arrays at arbitrary indices but any memory whatsoever, including through uninitialized pointers. This has been the cause of countless bugs, crashes, and security breaches over the years.

In Python, you never know the type of a variable until you try to use it, and then it either works correctly, works through implicit conversion, or simply doesn’t.

In Julia, differentiable programming (via Zygote) is possible, though still buggy in many cases. In most other languages, it is not possible at all.

In Julia, custom/alternative indexing is possible, though still buggy in many cases. In most other languages, it is not possible at all.
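Both points fit in a few lines; here is a minimal sketch, assuming the Zygote and OffsetArrays packages are installed (the specific values used are just illustrative):

```julia
using Zygote        # reverse-mode automatic differentiation
using OffsetArrays  # arrays with arbitrary (e.g. zero-based) index ranges

# Differentiable programming: d/dx (3x^2 + 2x) at x = 2 is 6*2 + 2 = 14.
g = Zygote.gradient(x -> 3x^2 + 2x, 2.0)
@show g  # expect the tuple (14.0,)

# Custom indexing: a 3-element vector indexed 0:2 instead of the default 1:3.
v = OffsetArray([10, 20, 30], 0:2)
@show v[0]  # expect 10 (with default 1-based indexing this would be a BoundsError)
```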

With great power comes great responsibility, more bugs, and a greater need for extensive unit testing, both in your own code and in the libraries you use.


Thank you @ParadaCarleton for this comment; it definitely resonates with the experience of many of us. As you pointed out with a practical example from autodiff packages, I share your view that our community faces a social problem: people prefer to start a new experimental package instead of contributing to existing ones that are just fine. The issue is even worse in hyped fields (e.g. ML).

Now, let’s think about possible solutions:

  • What mechanisms can we implement to reduce this fragmentation?

  • Could a feature in JuliaHub help the community overcome this social problem and become more collaborative instead of competitive?

  • Would a “blessing” from “experienced” members of the various Julia organizations help? For example, could the autodiff maintainers sit together in a meeting and review the pros and cons of the different designs, to come up with a united front that is widely promoted for most end users? (Note: this is just an example; think more broadly and try not to get lost in autodiff specifics)

If we can start thinking actively about these social issues, we will be able to provide a more pleasant experience for newcomers. Below I share other examples of “competition” that I personally find difficult to grasp without diving deeper into the literature and source code.

Reinforcement learning

  • POMDPs.jl (496 stars)
  • ReinforcementLearning.jl (398 stars)
  • AlphaZero.jl (1015 stars)

Krylov methods

  • IterativeSolvers.jl (320 stars)
  • Krylov.jl (211 stars)
  • KrylovKit.jl (161 stars)

Named arrays

  • AxisArrays.jl (166 stars)
  • NamedArrays.jl (93 stars)
  • DimensionalData.jl (159 stars)
  • AxisKeys.jl (120 stars)
  • NamedDims.jl (105 stars)
  • LabelledArrays.jl (94 stars)

Notice that all these packages are pure Julia :white_check_mark: and have a similar number of stars :white_check_mark: Whenever I find myself in this situation, I have a hard time picking one as a dependency in my projects. Imagine if all these maintainers were working together…


Well, sort of. But if I were trying to create a Windows simulation and had a choice between Visual Studio / C# and Julia, I wouldn’t necessarily pick Julia. I might well decide that Visual Studio was the best choice for building the user interface, and that C# would probably be fast enough. C# and Julia are both ‘general purpose languages’, but that wouldn’t help me; I want to pick the best tool from my toolkit. What is Julia best for?

Also, as per the book “Crossing the Chasm”, to succeed you need to pick a market segment and dominate it. If Julia is ‘general purpose’, have you sufficiently defined your market segment?


They don’t do the same thing. LabelledArrays.jl, for example, is simply not comparable to AxisArrays.jl in how it works, how it scales, or what it does.


Unclear if the argument here is that Julia is too specialized or not specialized enough.


Thank you @ChrisRackauckas, I anticipated this type of answer. I think it illustrates the issue we are discussing very well. The situation is so fragmented that only the maintainers (or advanced members) can spot the exact differences between packages. By quickly scanning the README files, we cannot tell that the packages are fundamentally different.

Someone else will probably point out the differences between the Krylov packages and someone else will probably point out the differences between the reinforcement learning packages, but that is not the point. :slight_smile:


What do you mean? LabelledArrays.jl doesn’t once mention an “axis” or have a concept of an “axis”, and all of its tutorials use only small one-dimensional vectors, while AxisArrays.jl’s tutorials show a bunch of things about labelling the dimensional axes. How are those kinds of libraries even close? LabelledArrays.jl and ComponentArrays.jl are close, but LabelledArrays.jl is as similar to Flux.jl as it is to AxisArrays.jl.


I mean that the fact I misinterpreted the purpose of LabelledArrays.jl is an indication that the situation could be improved. Many other packages under this umbrella also don’t mention “axis” in the README, and yet they provide comparable functionality or discuss the concept of “labels in arrays”. The point is: we can dive into this specific distinction, or we can look at the broader issue raised by @ParadaCarleton.

I hope we can improve the broader issue. Do you have suggestions of improvement or a solution to the fragmentation? Do you believe it is an issue?


@gideonsimpson has a point. Julia started as a dynamic language for technical computing, and TBH this itself was a great goal; no other language to this day has excelled enough at it.

Excerpt from original Julia website, 2017:

MATLAB, for example, with all its flaws and horrible design choices, entered the top 20 because it excelled at exactly what it was designed for. There are too many general-purpose languages out there (Python, Java, C, C++, C#, Go, Clojure, etc.), but nothing good enough for scientific computing. Julia could easily enter the top 10 if it gets polished enough in this area (technical computing, machine learning, …). No other language has been brave enough, or has come as close as Julia, to cracking this area.


Exactly. Fortran was the scientific computing language of its era, but even with the more modern features of Fortran 90 and later, it’s still a mess of lousy syntax and legacy design. There’s a real gap for a modern, open-source language targeting scientific computing applications, as opposed to general-purpose languages.


It was an issue. The solution, at least in the area of numerical and symbolic equation solving, is the SciML ecosystem and its common interface. There is a common interface documented across all numerical and symbolic equation handling here:


This gives fast and uniform handling of:

  • Linear systems (LinearProblem)
    • Direct methods for dense and sparse
    • Iterative solvers with preconditioning
  • Nonlinear Systems (NonlinearProblem)
    • Systems of nonlinear equations
    • Scalar bracketing systems
  • Integrals (quadrature) (QuadratureProblem)
  • Differential Equations
    • Discrete equations (function maps, discrete stochastic (Gillespie/Markov)
      simulations) (DiscreteProblem)
    • Ordinary differential equations (ODEs) (ODEProblem)
    • Split and Partitioned ODEs (Symplectic integrators, IMEX Methods) (SplitODEProblem)
    • Stochastic ordinary differential equations (SODEs or SDEs) (SDEProblem)
    • Stochastic differential-algebraic equations (SDAEs) (SDEProblem with mass matrices)
    • Random differential equations (RODEs or RDEs) (RODEProblem)
    • Differential algebraic equations (DAEs) (DAEProblem and ODEProblem with mass matrices)
    • Delay differential equations (DDEs) (DDEProblem)
    • Neutral, retarded, and algebraic delay differential equations (NDDEs, RDDEs, and DDAEs)
    • Stochastic delay differential equations (SDDEs) (SDDEProblem)
    • Experimental support for stochastic neutral, retarded, and algebraic delay differential equations (SNDDEs, SRDDEs, and SDDAEs)
    • Mixed discrete and continuous equations (Hybrid Equations, Jump Diffusions) (DEProblems with callbacks)
  • Optimization (OptimizationProblem)
    • Nonlinear (constrained) optimization
  • (Stochastic/Delay/Differential-Algebraic) Partial Differential Equations (PDESystem)
    • Finite difference and finite volume methods
    • Interfaces to finite element methods
    • Physics-Informed Neural Networks (PINNs)
    • Integro-Differential Equations
    • Fractional Differential Equations
  • Data-driven modeling
    • Discrete-time data-driven dynamical systems (DiscreteDataDrivenProblem)
    • Continuous-time data-driven dynamical systems (ContinuousDataDrivenProblem)
    • Symbolic regression (DirectDataDrivenProblem)
  • Uncertainty quantification and expected values (ExpectationProblem)

It’s not all completed yet; I’d say some of the peripheral libraries (like Quadrature.jl) still need a few more months before we can really call them ready for prime time. But yes, there is a fragmentation issue, and the common interface is the solution to that. (And there are still a few places we need to go, like interpolation.)

Because of that, I would say we should stop recommending things in the “fragmented universe”. People shouldn’t use QuadGK.jl directly, or Optim.jl directly, or NLsolve.jl directly: they all have different interfaces, different keyword-argument conventions, etc. If you stick to GalacticOptim.jl, NonlinearSolve.jl, and Quadrature.jl, you’ll have more features, more solvers available, and uniformity across all of the different problems (and in any case where there’s a uniformity break, please open an issue and we can get it corrected).
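To make the “common interface” concrete, here is a hedged sketch of the define-a-problem / `solve(prob, alg)` pattern shared across the problem types listed above (assuming the OrdinaryDiffEq and NonlinearSolve packages are installed; solver choices here are just illustrative defaults):

```julia
using OrdinaryDiffEq, NonlinearSolve

# ODE: u' = -u with u(0) = 1 on t in [0, 1]  (an ODEProblem)
ode = ODEProblem((u, p, t) -> -u, 1.0, (0.0, 1.0))
odesol = solve(ode, Tsit5())          # solution object, indexable/interpolable

# Nonlinear system: find u such that u^2 = p, with p = 2  (a NonlinearProblem)
nl = NonlinearProblem((u, p) -> u .^ 2 .- p, [1.0], 2.0)
nlsol = solve(nl, NewtonRaphson())    # root near sqrt(2) ≈ 1.414
```

The point of the uniformity is that swapping domains (ODE vs. nonlinear vs. quadrature vs. optimization) changes only the problem type and the solver, not the shape of the code.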

I’ll stay out of data science because this is already a giant domain to tackle. Data science similarly needs consolidated interfaces and someone to completely overhaul it.


Python’s success in the scientific computing space is significantly bolstered by its general-purpose utility. Many folks don’t want to have to reach for a second language to handle basic tasks that fall outside the purview of languages like MATLAB or R.

You see this talking point come up a lot in Python vs R discussions, including from folks who think R is a better language but say you should use Python anyway because you can use it for pretty much anything.

I, for one, would be much less happy with a hypothetical version of Julia that sticks to scientific computing and lacks general-purpose functionality.


There have been discussions about interfacing Julia and C# :slight_smile: but they have received little attention so far, apart from GitHub - ShuhuaGao/JuliaCSharp: Julia and C# interoperation and GitHub - HyperSphereStudio/JULIAdotNET. See e.g. Interoperability with .NET for more info if you want.


On named-dimension packages: we explicitly agreed not to cooperate until a later date, so that we could freely explore the design space, but to freely steal each other’s ideas (which we did). These packages are easy for one person to write in Julia; each is a small fraction of its lead dev’s time. You will also notice those stars are not evenly distributed in time: some packages are very old (AxisArrays.jl and NamedArrays.jl). And LabelledArrays.jl is really quite a different thing from the others.

For coordination to be efficient for any of the devs, we would need clear, broadly agreed-upon design goals, and we just aren’t there yet. Some want bare-bones simplicity; some (e.g. me) need a lot of functionality and extensibility. We will eventually work out a compromise, but it’s not as if anyone set out to be the “named dimension array” project leader. We all just need some vaguely similar tools, but for quite different applications, like loading a NetCDF file or running machine learning models.

mcabbot and I, at least, have discussed that we need another year to know what we really need and how to organize it. But at some point in the exploration of the design space it will be more efficient to cooperate, and then we will probably choose to do so.


There’s plenty to comment on here (activity on an AD package vs. a full ML framework is not an apples-to-apples comparison; there is already plenty of consolidation; working on AD #1 doesn’t mean you can understand the internals of AD #2; plenty of feedback comes in from outside GitHub (ref. @jlperla’s comment about grumpy users :stuck_out_tongue:); etc.), but in the interest of not pinging 75+ people with another 20+ posts they may not care about, I’d recommend spinning off a separate discussion for this, and possibly for named arrays as well.


Why isn’t it IntegralProblem rather than QuadratureProblem for Integrals? Wouldn’t that be more consistent?


And I would also prefer Optimization or OptimSolvers to GalacticOptim. But who am I :joy: