The State of the Julia Ecosystem

Cxx.jl

4 Likes

Well okay, sure, that's probably the best example. But even then, most of the features supposedly work now:

https://github.com/Keno/Cxx.jl/pull/395

https://github.com/Keno/Cxx.jl/commit/c62cb578ebde43d040e3f175ea2365b292c6f487

I'm not sure I totally agree that a language launched in 2012 is such a young one, but I do agree that it is and remains a WIP, as do all programming languages.

But my article is not about that. There is a requirement upon a young language, or otherwise, to attract and encourage new users, and I, as a seasoned user, am afraid that this has not happened yet (?) with v1.0, which is why I took the time to draft the posting.

1 Like

I am curious how you are measuring this.

I know these things are hard to quantify, but cf. some GitHub statistics in the recent newsletter.

Actually Cxx.jl was my choice too, and I spoke of it at Monday's talk; the reason I had looked at it again was that I had recently finished revising a chapter on connectivity and went back to see if, having been tagged as updated to v1.0, it now worked.

As an aside, I also looked at HTTP.jl for a coming chapter, and while the README suggests it does not comply with v1.0, it installed (always a good thing) and worked sufficiently for my purposes.
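
For the record, the check was only the basic request path; a minimal sketch of the kind of call I mean (the URL is just an illustrative placeholder):

```julia
using HTTP

# Fetch a page and inspect the response; any reachable URL will do.
r = HTTP.get("https://julialang.org")

println(r.status)        # HTTP status code, e.g. 200
println(String(r.body))  # response body as a String
```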

I was amused (perhaps not the right word) that the top Machine Learning package on Julia Observer is Mocha.jl, which the GitHub page explains has been retired. As I have said a couple of times, I think JO fulfils a role, but it is by no means the complete deal.

My concerns are also about the quality of information on packages as well as the quantity, and I see nothing wrong with asking which packages work with v1.0, as a percentage or otherwise. Isn't that supposedly the purpose of NewPkgEval, which I, as an attendee at the Hackathon in London, saw displaying embryonic information on August 12th?

I showed the graph you have included in my talk on Monday. Of course there is a lot of activity on GitHub post-JuliaCon 2018; version v1.0 was announced then with a fanfare and a lot of people downloaded packages.

But again, this is activity, not compliance - it is information on the latter which I would like to see being produced and presented.

I believe I somewhat flippantly added the epithet: Isn't Data Science wonderful?

It was after all supposed to be a fun evening.

That is a mistake in the README. It actually only tests v1.0+ these days.

https://github.com/JuliaWeb/HTTP.jl/blob/master/.travis.yml#L7

I put a PR in to fix the README:

2 Likes

I used to be in the "should have waited longer" camp - but I think the decision to release when they did has proven to be the right one. The core team has limited influence on package maintainers, who are all busy folks, so I'm glad the core team took the lead with releasing 1.0, which spurred a bunch of packages to update much more quickly than I think they would have done otherwise.

10 Likes

I'm pleased to have achieved something at least.

I'm not criticising HTTP.jl and the work JuliaWeb does. It is just that in revising chapters on a variety of topics I need to try (and retry) a great number of packages to see what to include and what to leave out.

I have learned not to trust individual GitHub pages.

This, after all, is the task which prompted my posting.

Yeah, the Julia.jl repo that JO pulls from is less than ideal. I think there needs to be some element of manual curation. "Editor's Choice" lists exist in Spotify and the iOS App Store for a reason, since pure usage metrics are always lagging. Mocha.jl is a good example of this: it was deprecated earlier this month yet it's still a >1,000-star library, so any automated system isn't going to recognize that it's a thing of the past.

1 Like

I have wondered if the list that Dan Segel provides on his website is in fact the best; I'm not really qualified to comment on this, but does it, for example, capture activity generated by the new package manager?

However, any system such as this is no substitute for testing individual packages to see if they install and build, and whether the test set executes (sometimes this may be a little dubious), which is why it was done up to 5th August 2018.
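
For anyone repeating that exercise, the per-package check is short with the new package manager; a sketch, using HTTP.jl purely as a stand-in example:

```julia
using Pkg

Pkg.add("HTTP")    # does it install and resolve?
Pkg.build("HTTP")  # does any build step succeed?
Pkg.test("HTTP")   # does the test set execute?
```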

I periodically check and hope for an updated PETSc.jl (or something offering similar distributed memory functionality for scattering unknowns, creating matrices and doing parallel linear algebra). Looking at the repository, I'm not sure it was ever updated for v0.6 though. Maintaining that was probably a lot of work, so it isn't surprising that it hasn't been regularly updated.

That never even did testing on v0.5:

https://github.com/JuliaParallel/PETSc.jl/blob/master/.travis.yml

No worries. This is a developer forum, so being specific about the issues is usually a good way to actually get the problem solved, and it helps us track what problems need to be solved. That's why I ask for these details. At this point, most libraries should be updated, but documentation usually lags behind changes, and that's always an easy fix if someone points it out.

2 Likes

One of the packages I constantly have problems with is an old favourite of mine, Winston.jl, which has claimed to drop v0.5/v0.6 support; the advice I've been given (probably good) is to concentrate on the Plots API in the relevant chapter.

However, I am quite attached to Winston - even if it is Smith and not Churchill. It has a tendency to install/work on some of my machines and not others, all Macs - a bit like the latter personage.

Some of the database stuff is not quite there yet, especially the NoSQL stuff; I was recently working on a version of Redis but a recent update scotched my development. I'm not sure about Mongo, Neo4j, etc.; I will see in a few weeks when I get around to it.
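
For context, the basic round trip I was exercising looked roughly like the sketch below; it assumes Redis.jl with a Redis server running locally on the default port, and the names reflect the API as I last used it, pre-update:

```julia
using Redis

# Connect to a locally running Redis server (default host/port).
conn = RedisConnection(host="127.0.0.1", port=6379)

set(conn, "chapter", "NoSQL")  # store a key/value pair
println(get(conn, "chapter"))  # fetch it back -> "NoSQL"

disconnect(conn)
```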

Most packages come with a free license and no liability to users; it's a good idea to check out the code, repository, contributors, issues, etc., to familiarize yourself with the packages you'd like to use. It's not necessarily recommended to automatically trust code written by anybody on the internet, for any language.

1 Like

JuliaPro now ships with its own vetted registry: every package included in that registry is tested and works. This registry includes all packages previously included with JuliaPro.

Even the General registry is now capped so that packages which don't work on 1.0 are marked as such. The main issue remaining with that is that the resolver gives pretty inscrutable error messages when it can't find a version of something that's compatible.
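
To illustrate how the capping is expressed: compatibility is declared as version bounds, roughly like the (made-up) entries below, which the resolver then enforces:

```toml
[compat]
julia = "1"   # resolves only on Julia 1.x
HTTP = "0.7"  # and only with HTTP 0.7.x releases
```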

Meanwhile, Julia 1.1 is on target to be released with zero package breakage. This is being actively tracked in this issue:

At the time of creating the release-1.1 branch, only 27 packages had test failures (which is as expected since 1.1 is meant to be non-breaking). The final 1.1 release will not be tagged until each package test failure is investigated and the cause of the breakage is determined to either be due to:

  1. the package depending on Base internals, in which case an updated package version will be released which works with both 1.0 and 1.1; or
  2. a genuine Julia API breakage, in which case that breakage will be fixed/reverted.

The list of packages for which this has yet to be done is now fewer than 10.

14 Likes

If you want to "pin" to a specific set of versions, e.g. for a book or a set of lecture notes, you can create a project specifically for this task and commit the Manifest.toml.

Some simple CI for the functionality that you rely on would ensure that you always provide a working set of versions. And you can of course update these after the notes/book are out.
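
A minimal sketch of that workflow (the project name and the packages added are just placeholders):

```julia
using Pkg

# Create and activate a dedicated project for the book's code.
Pkg.activate("MyBookProject")

# Add the packages the chapters rely on; the exact resolved
# versions are recorded in Manifest.toml, which you commit
# alongside Project.toml.
Pkg.add("HTTP")
Pkg.add("Plots")

# Optionally pin a package at its current resolved version.
Pkg.pin("HTTP")
```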

As I said, I was reluctant to mention J-Pro as this was not the main thrust of my article, and I feel it would sidetrack the comments I was trying to make.

But as Pandora's box is open, are you saying that future downloads (of J-Pro) will be shipped with a package bundle? That would be extremely helpful.

I am aware of pinning versions and not really worried about the impact on myself, other than the time involved in reviewing a set of packages on a per-topic basis - graphics/databases/machine-learning, etc. - but thanks for the advice.

However, it is my intention, for the time being at least, to ship the code as a series of Jupyter notebooks / Juno scripts, and these need a clean standard download for the readers to work with.
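
One way to square that with the pinning advice above would be to ship the Project.toml and Manifest.toml alongside the notebooks and have the first cell reproduce the environment; a sketch, assuming the files sit next to the notebook:

```julia
using Pkg

# Activate the shipped environment and install the exact
# versions recorded in Manifest.toml.
Pkg.activate(".")
Pkg.instantiate()
```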

I'm currently working on C8 (of 12) and may already have to amend what I have said in a couple of places; one such example is the suggestion that J-Pro is useful because it comes with a guaranteed package bundle.

1 Like