Discussion on "Why I no longer recommend Julia" by Yuri Vishnevsky

We can debate that. Quadrature is a more direct way to say “numerical integral problem”. Other areas don’t make as much of a naming distinction between numerical and analytical problems (“solve an ODE” can be symbolic or numeric, while “solve a quadrature problem” is clearly numeric and “solve an integral” is ambiguous).

8 Likes

I’d happily entertain this discussion. Now that everything is coming together, inconsistencies like this are becoming more apparent, so Optimization.jl probably sounds better, as all the other solver packages are explicitly named.

16 Likes
NumericIntegrals then? (Or something along those lines?)

To be honest, although I have used numerical integration many times (frequently just writing my own code), I don’t remember the term quadrature being very common.

4 Likes

But if you pass in a problem to compute an integral and there is a cheap symbolic solution, wouldn’t you want to use that?

1 Like

True, and the difference between symbolic and numeric is eroding these days.

(which coincidentally was just accepted for publication about half an hour ago :sweat_smile:).

I think the better argument might be just that: the interface should be as simple and consistent as possible for the least common denominator. I.e., the newest user might search “integral” first, so it should be IntegralProblem, Integrals.jl, and Optimization.jl. We can take this discussion to another thread, but if everyone seems to be in agreement then I’d be happy to make this change this week.
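For concreteness, here’s a minimal sketch of what the renamed interface would look like (the package is still Quadrature.jl as of this post, so treat the Integrals.jl/IntegralProblem names as the proposal, not something installable yet; the problem-solve API is assumed to carry over unchanged):

```julia
using Integrals   # hypothetical name at the time of this post

f(x, p) = sin(p * x)
prob = IntegralProblem(f, 0.0, pi, 1.0)   # ∫₀^π sin(x) dx, with parameter p = 1.0
sol = solve(prob, QuadGKJL())             # backend wrapping QuadGK.jl
sol.u                                     # ≈ 2.0
```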

(The one thing that bugs me here as well is that integral equations are a completely different thing, which someone interested in differential equations might go looking for. They might well reach for Integrals.jl, since hey, it’s the same group as differential equations, and get confused looking for integral equation solvers.)

22 Likes

:scream:

What about the things that SciML doesn’t do well? (things like Gridap, Trixi, …)

1 Like

That’s what more grad students are for :slight_smile:

4 Likes

That’s somewhat of a joke, but also somewhat true. Why is it that today you have to know what the finite element method is or what a one-dimensional conservation law is, in order to pick up the right PDE solver package, dig through how to use it, and then get a reasonable solution? If I wanted to answer the question “on this PDE, what’s the best way to discretize it?” numerically, why do I have to learn 10 PDE solver packages and write thousands of lines of code to get them all working? Why can’t we just specify a PDE once, symbolically, and shuttle it to all solvers?

In SciML we spend our time doing multiple things:

  • Build more generic fast primitives
  • Build generic fast solvers
  • Create interfaces over sets of solvers to give a common API

With the latter, we always wrap what’s around: Sundials.jl, DASKR.jl, SciMLNLSolve.jl (MINPACK, NLsolve.jl), etc. We always reuse code. But I don’t believe the way forward is to have every single solver have a different API. If Sundials.jl did not play nicely with OrdinaryDiffEq.jl, the benchmark suite of SciMLBenchmarks.jl just would not exist.
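As a minimal sketch of what that shared API buys (standard DifferentialEquations.jl-style usage; the wrapped C solver and the native Julia solver are interchangeable):

```julia
using OrdinaryDiffEq, Sundials

f(u, p, t) = 1.01u                        # simple exponential growth ODE
prob = ODEProblem(f, 0.5, (0.0, 1.0))

sol_julia   = solve(prob, Tsit5())        # native Julia Runge-Kutta from OrdinaryDiffEq.jl
sol_wrapped = solve(prob, CVODE_BDF())    # SUNDIALS' CVODE through the same interface
```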

So what do we do with the things we don’t do well? We wrap them. I don’t believe the way forward will be to recommend to an economist that they go pick up a few books on finite element methods to then start using Gridap.jl. Can’t we just give a generic symbolic interface where they define their equation at a high level, and have it work with finite elements, finite differences, finite volume, pseudospectral, physics-informed neural networks, DeepONets, Fourier Neural Operators, DeepBSDE, etc. methods?

That’s what we’re building with the generic PDESystem interface, and over time it’s what I’ll be pushing newcomers to more and more. It’s not even too crazy: Mathematica has a (primitive) version of it. I honestly believe that someone should only have to use Gridap or Trixi directly if they are doing research in the subject or really want to squeeze every last drop of performance out, otherwise we should be able to shuttle equations to those packages in a nice high level way where a user does not even need to know what “finite” means.
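For a taste of what that looks like today, here’s a rough sketch of the 1D heat equation through the PDESystem interface, assuming the current ModelingToolkit.jl + MethodOfLines.jl API (finite differences are just one of the intended discretizer backends):

```julia
using ModelingToolkit, MethodOfLines, OrdinaryDiffEq, DomainSets

@parameters t x
@variables u(..)
Dt  = Differential(t)
Dxx = Differential(x)^2

# The heat equation with Dirichlet boundary conditions, stated once, symbolically.
eq  = Dt(u(t, x)) ~ Dxx(u(t, x))
bcs = [u(0, x) ~ sin(pi * x),
       u(t, 0) ~ 0.0,
       u(t, 1) ~ 0.0]
domains = [t ∈ Interval(0.0, 1.0),
           x ∈ Interval(0.0, 1.0)]

@named pdesys = PDESystem([eq], bcs, domains, [t, x], [u(t, x)])

# One possible discretizer; in principle the same PDESystem could be handed
# to finite element, pseudospectral, or neural-network based backends.
disc = MOLFiniteDifference([x => 0.01], t)
prob = discretize(pdesys, disc)
sol  = solve(prob, Tsit5())
```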

SciPy is brought up as an example, but I don’t think that even hits the goal that we should strive for. Mathematica is probably closer in how uniform and automated it is. And that should be no surprise because of the centralization afforded to the Mathematica developers. I want an ecosystem where someone could change to a completely different numerical realm but be able to guess what all of the arguments will be named and be correct. I want an ecosystem where everything has a symbolic layer that mixes analytical solutions and fast paths automatically, so that answering Discourse questions on ways to do things better is a thing of the past because the best route is automated code generation. I want an ecosystem where there is one documentation and everything is closely linked, where the solvers for differential equations use a well-defined interface for linear solvers, and where everything acts the same with respect to generic programming and differentiable programming.

SciML isn’t there yet, but if we work hard enough it will get there. Then yes, I don’t think I’d recommend most users go directly to the solver. You don’t recommend that users call ARPACK directly, you tell them to use eigs. You don’t recommend that users call LAPACK’s dgetrf, you tell them to use lu. Right now we, and the Python ecosystem too, are missing that level of abstraction.
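To make the analogy concrete, a minimal sketch of that layering as it already exists for linear algebra:

```julia
using LinearAlgebra, SparseArrays, Arpack

A = sprand(1000, 1000, 0.01)
A = A + A' + 10I                # symmetrize and shift so it's safely invertible

λ, ϕ = eigs(A; nev = 5)         # high-level call driving ARPACK's Arnoldi iteration
F = lu(Matrix(A))               # high-level call dispatching to LAPACK's dgetrf
```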

48 Likes

Can I emphasize again the importance of the issue raised by @ParadaCarleton? Do we have more proposals to address the fragmentation in the ecosystem as a whole?

@ChrisRackauckas although I agree that the situation in the PDE world is much more evolved thanks to you and other contributors, that is far from the reality of other ecosystems in the language. I ask you to consider the whole set of users of the language, beginners and intermediate users, who sometimes will never use PDEs in their work.

That is the viewpoint we need to keep in mind in my opinion:

…we are learning a new language and are confronted with 6 competing packages to achieve what we want (e.g. Flux.jl, Lux.jl, KNet.jl, Avalon.jl, …). Every time we try one of them with a slightly different example from their README, we find a new bug. We are disappointed and unsure if we should migrate 100% of our projects to this new language.

Regarding interfaces, I feel that they are not the solution to the social problem we are facing here. The fragmentation is happening for other reasons, which could be (1) difficulty discovering existing packages, (2) lack of collaboration guidelines in existing packages, (3) inefficient contributorship systems and packages that reside in personal GitHub accounts instead of in organizations, etc.

If we don’t solve this issue or at least think actively about it, we will never scale to the point of addressing the problems raised by Yuri’s post, which started this thread.

8 Likes

I really don’t think there’s a fragmentation issue in terms of packages. Julia has 7,700 registered packages, PyPI has 378,000, that’s 49x more.

(The vast majority of the Python packages were released in Julia’s lifetime so “Python is older” doesn’t explain the gap.)

7 Likes

@cjdoris 's point is spot on.

How come Python isn’t “fragmented”? Look at ode · PyPI; that package looks pretty terrible if you dig into it. How is that the “ode” package of Python? Don’t people get confused?

No, everyone goes to use SciPy. Over time, a few packages emerged as good things to use, and the vast majority are just largely ignored. Creating and trying new things isn’t an issue; it’s just one more chance at hitting a jackpot good idea. Everyone should try it, few will actually work out, and that’s fine. People said the same thing about Android early on, people said the same thing about Python early on, and it’s always the same everywhere: an open system will naturally have a few things come to the forefront and escape from the noise.

So on that note.

I think you vastly underestimate the power of the individual in an open source community. In fact, I would even go so far as to say that the issue is the complete opposite of what you’re describing.

Collective action isn’t what generally gives uniformity in open source communities: strong individual contributions do.

People complained for years that there wasn’t one clear interface for dataframes in Python. Wes McKinney solved it, with only one other real contributor, by building Pandas (and yes, there was an entire issue back in 2019 about how the entire backbone of Python data science was held up by two people). Open source scientific libraries all have a decent enough BLAS implementation because one guy did the vast majority of the work to build something called OpenBLAS (who then passed it on to one guy, who then passed it on to one guy). LINPACK, which led to the standardization of interfaces in LAPACK, was three people. Bjarne Stroustrup made C++, and yes, a committee has guided it for decades since, but if you point to the things that “look weird in C++”, those are the things that came about by community vote. Why did Python’s core library look so uniform? Guido van Rossum as BDFL took it upon himself to look at the full interface and make it uniform.

My point is, if you look at examples across open source, you will find that in something like 99 out of 100 cases, the things that came out good and uniform had one major driver/contributor championing them. You don’t get uniformity by having 1000 uninformed people vote on small pieces that they don’t know much about; you get uniformity by having one or two people who know the entire system, look at the forest instead of the trees, and say “this is how I would do it differently”. And when those solutions are good, they bubble up. Those passionate champions answer the first 10,000 questions about the topic on StackOverflow, write blog posts about the “right way” to program in X, help everyone in chatrooms, and soon a whole community is formed with a search history carrying a clean and clear vision of how X “should” be done: how the interfaces are defined, what is in scope and out of scope, which pieces matter and which pieces don’t.

So let’s pull this back to Julia and the blog post of the OP. The issue is not that 1000 people haven’t voted on the right interfaces for JuliaStats and come to some agreement on how to do things, the issue is that no one is driving it. Clustering.jl requires data in columns for some methods and data in rows for other methods: someone just needs to choose how they want it. Should every stats function take in a DataFrame? AbstractTable? If you do statistics, what’s the right end-to-end workflow? What plotting library should you be using, and why? Who has posted so much that after a breaking change, they can spend a day just updating all of the old StackOverflow and Discourse posts so that way the Google history acts like the break never even happened?
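To illustrate the kind of convention-picking that needs a driver, here’s a hedged sketch of the orientation bookkeeping a user carries today (I’m not auditing which specific Clustering.jl methods disagree; these are just the calls I believe exist):

```julia
using Clustering, Distances

X = rand(5, 200)                          # 5 features × 200 observations (columns)

km = kmeans(X, 3)                         # kmeans takes the raw matrix, columns = points
D  = pairwise(Euclidean(), X, dims = 2)   # elsewhere you must remember dims = 2 yourself
hc = hclust(D; linkage = :average)        # hclust instead consumes a distance matrix
```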

Anyone who reads this can do it themselves. You can do it! Don’t wait for collective action here. Pick a name, “EasyStats.jl”, write the package, build off of what people have done, write the tutorials, promote it to all hell and back, tell people why this is the right and consistent way to do things, write the style guide and enforce it in every PR, fix every bug people post online, review every PR within an hour of it being posted so that you can ensure it matches the vision, hand-hold newcomers and train them in your style, lift up students who are interested and get them funded to continue working on the pieces they are passionate about. Do this for 5 years. Hell, there’s even a ton of help waiting for you. Did you know that every single year I’ve done GSoC for JuliaLang (I started admining around 2016) we have turned away 2-3 good students in statistics because no one stepped up to mentor them? That’s 2-3 paid positions each year that go unfilled because no one wanted to step into the driver’s seat.

If you want to change Julia for data science, there’s a Ferrari waiting for you. Just make sure to buckle up and know what you’re about to take on. I’ll cheer you on, but I have a large enough scope already (maybe too large), so I don’t plan to lead a data science push. But I’ll tell you, it’s a fun ride, so be ready for an adventure.

82 Likes

Python ecosystem 1000x more developers/users?

3 Likes

The number of users doesn’t matter one way or another for package discoverability. The point is that having lots of experimental packages isn’t a problem for the ecosystem as long as there’s at least one excellent, high-visibility package for each use case.

4 Likes

yeah agreed, not yet :slight_smile:

Fully agree. But why do we need to start all over as a completely separate effort instead of in a branch? Or by opening an issue and discussing with others whether the idea is welcome before starting a new repo?

I don’t. I am 100% aligned with the idea of leadership in open source. There must be someone responsible for each initiative, driving the direction and organizing the mess. Unfortunately that is not happening, and that is an issue.

Fully agree again, but I think that the existence of a leader is just one component of an effective solution to fragmentation. We have examples in Julia of various projects with leaders that still suffer from a lack of maintainers. Potential maintainers simply decide to start a completely separate effort instead of contributing.

4 Likes

Interesting point about the fragmentation.

Several ideas, from my point of view:

  • In Julia, there are several great approaches to unifying functionality, such as MLJ for ML libraries or GalacticOptim.jl for many optimization problems. I think this is a great option: it allows researchers to create small packages while maintaining a common API with similar ones.

  • In deep learning we mainly have Flux, and FastAI adds higher-level functionality on top of Flux (so both are needed), plus alternatives like Knet and other less popular ones. It is true that there are many packages for processing data, not all with good names (e.g. MLDataPattern.jl or MLLabelUtils.jl), but there is now work on unifying them, like MLUtils.jl.

  • We have IJulia for Jupyter, and Pluto; the two are very different in their design.

  • In plotting, we have Plots with several backends, and Makie, also very popular (with AoG). I do not think there is fragmentation here; they are very different.

  • DataFrames is the alternative to pandas, and we do not need NumPy in Julia. I use DataFrames a lot, and it is one of the great advantages of Julia in my opinion (pandas works great, but its API is not intuitive for me; I always have to go back to the documentation).

  • In metaheuristics, my research topic, there are actually several packages, many for different algorithms, but consolidated options are emerging, like Metaheuristics.jl (many competitive algorithms, less flexible) or Evolutionary.jl (focused on genetic algorithms, very flexible). I hope that with time the small packages implementing only one algorithm will no longer be needed.

  • For statistical tests, we have HypothesisTests.jl and Pingouin.jl; both are nice but have different interfaces.

Thus, in my experience, there is not so much fragmentation. I am more concerned about the documentation of several widely used packages, and about error messages. Also, there are public developer meetings that could be used to create more synergy between the authors/developers of different packages.

4 Likes

Why a branch that almost nobody can install instead of a package that people can start using? If you know it’s a good idea because you’ve been thinking about it for 20 hours a day for the last few years, why would you let someone who hasn’t thought through all of the implications block you from executing your master plan? Why not just do it, and ask for help along the way?

Take today for example. I knew Julia needed a package that tames the insane number of ways to do nonlinear optimization; I knew because I needed that package. I had a research project that used Evolutionary.jl, NLopt.jl, Optim.jl, MathOptInterface.jl + Ipopt.jl, and BlackBoxOptim.jl all individually. I knew what I wanted to build. Did I get it right the first time? No, GalacticOptim.jl had issues with Requires, and a few interface issues. I threw the package up there and started churning, and took in the feedback. Today it’s changing its name from GalacticOptim.jl, a silly pun on having multiple global optimizers, to Optimization.jl because of feedback from this thread and a quick poll on Slack (16 upvotes and no downvotes in a night; that’s enough to pull the trigger IMO).
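To make the scope concrete, here’s roughly what the unified interface looks like (written with the post-rename package names, so treat Optimization.jl and its solver-wrapper sub-packages as the proposal rather than what’s registered today):

```julia
using Optimization, OptimizationOptimJL, OptimizationBBO

rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2
u0, p = [0.0, 0.0], [1.0, 100.0]

prob = OptimizationProblem(rosenbrock, u0, p)
sol_local = solve(prob, NelderMead())     # Optim.jl's Nelder-Mead through the common API

prob_bounded = OptimizationProblem(rosenbrock, u0, p; lb = [-1.0, -1.0], ub = [1.5, 1.5])
sol_global = solve(prob_bounded, BBO_adaptive_de_rand_1_bin_radiuslimited())  # BlackBoxOptim.jl
```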

Could I have just opened a discussion on Optim.jl saying “hey guys, I think I want this, do you?” Yes. Not only could I have, but I did. And it went exactly how committee discussions go: you get a lot of great ideas that you need to think about, but no final answer. Look at how it closed: I just took a stab at doing it.

There are so many things that one could be doing right now to address some of the pieces of the post. Let me just list a few that come to mind:

  • Automatic differentiation correctness. Julia’s main advantage in the AD space is that everything works on the same representation (the Julia IR and Base libraries directly), and thus swapping AD libraries is simple. Someone with the time and effort could do a major push to make AbstractDifferentiation.jl the main entry point for AD, make all AD libraries extend some AbstractDifferentiationCore.jl, and say that the right way to do AD is to always check with two libraries and pick the fastest one (see the sketch after this list). Hell, that’s an even safer route than something like PyTorch or JAX, where you just have to trust that it’s right (and BTW, every AD has correctness issues).
  • As for the ADs themselves, Enzyme.jl is a two-person project right now, and I think it can solve a lot of the issues we currently have with AD (it already does solve a lot of them, which is a good sign). That’s a good train to jump on if one doesn’t want to take the lead.
  • Deep learning. I’m glad @avikpal took up the charge with Lux.jl because I think that’s the deep learning library we needed: something fully compatible with ImmutableArrays and ComponentArrays, and fully explicit in the parameters so that any optimizer from GalacticOptim (oops, Optimization.jl) can be used. I also think that, for very different cases, SimpleChains.jl is a fantastic project to explore in more depth.
  • Plotting and statistics. People keep the two separate, but any statistics or data science workflow uses them together. Why not build a statistics library where every tutorial uses AoG and every input takes a dataframe? That would be a major statement on “the right way to do statistics”, and I think a lot of newcomers would benefit from that world of stats.
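As promised above, a sketch of the two-library correctness check, assuming AbstractDifferentiation.jl’s backend API (the AbstractDifferentiationCore.jl split is hypothetical and not shown):

```julia
import AbstractDifferentiation as AD
using ForwardDiff, Zygote

f(x) = sum(abs2, x) * prod(x)
x = rand(4)

(g_fwd,) = AD.gradient(AD.ForwardDiffBackend(), f, x)   # forward mode
(g_rev,) = AD.gradient(AD.ZygoteBackend(), f, x)        # reverse mode

isapprox(g_fwd, g_rev; rtol = 1e-8)   # two independent ADs agreeing is a strong check
```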

And note that none of this is throwing everything out the window. Doing these things does not disrespect those who built packages before. AbstractDifferentiation.jl builds off of the existing AD packages. New deep learning packages like Lux.jl use the same bits like NNlib.jl, CUDA.jl, Optimisers.jl, etc. New statistics ecosystems can use AoG, DataFrames.jl, HypothesisTests.jl, etc. as building blocks. It’s still working with the ecosystem, not against it. It’s just not waiting for everyone to agree before making your move. Which is better, because you’ll never have everyone agree.

50 Likes

@ChrisRackauckas I agree again but am still concerned about end-users in the middle of all this experimentation. Your mindset is on the developer side. I agree that we should take action and do things when we feel that waiting for a response is less productive, but your approach of opening an issue and requesting feedback may not reflect what is happening elsewhere. There is a lot of unexplained duplication, in my opinion.

We can close our eyes to this issue, but I don’t think that is wise. The Julia ecosystem currently depends on some 10 to 20 folks maintaining tons of packages, and if some of these people get hired to do things outside of Julia, we lose a considerable workforce. That has happened before and will happen again. This competition of packages at this point in time is too risky to afford (even though we may disagree :slight_smile:).

4 Likes

There’s some logic to this. People can only be good at a certain number of things at the same time. I use R, and it works well, but I never seem to get around to using Python much, a much more useful skill.

2 Likes

Working in new packages is what’s good for end-users, because the only other option for making major changes is to break existing packages. There’s no need to force your changes onto users by breaking existing packages: just make a new one, and when it works, share it. End-users get the evolution of the package ecosystem but can update at their own pace.

And it will keep happening. Hell, Python has a huge issue with getting developers: it relies on about 10 people to cover the majority of NumPy, SciPy, and Pandas. The reason people don’t mention this that often anymore (about 5 years ago it was seen as a major issue and every conference had a BoF about it) is that these big packages rarely break these days. They are built, set in stone, and the maintenance just kind of happens. Python changes by adding new packages, and hundreds of thousands of them get ignored every day.

If you want to increase the number of developers, tying them down is going to have the opposite effect. Requiring that everyone agree on a solution before anyone can work is not how you get more people working. Lower the barrier to entry, increase the number of paid positions, promote the work of newcomers, and help anyone who wants to get started. If you’re a postdoc, get 10 grants so you can hire some people to work on open source. Support Julia-based companies, advertise for GitHub Sponsors. I don’t know, find advertising revenue from a YouTube channel? Whatever works. Just get more people going.

Everything naturally helps. Saying people need to work on the same end-user package to contribute to the ecosystem is way too narrow a view of how it all grows: you can have disagreements about APIs, names, and documentation style while sharing functionality. The Lux and Flux development efforts may not look similar, but at their core, the devs of both are going to be making NNlib.jl and Optimisers.jl better. The work on Enzyme has led to GPUCompiler.jl updates that improve CUDA.jl for Flux users who don’t use Enzyme. Diffractor.jl has led to improvements in the compiler plugins infrastructure which are essential for Enzyme’s performance.

If you let a tree grow, its leaves will naturally move towards the light. But if you tie a tree down to force it to grow to the right because you think there may be more light there, you might kill it. Instead of tying it down, try giving it water.

40 Likes