I am (once again) exploring Julia.
The last time I checked it was, I believe, in 2015 and many things have happened since then.
I am writing this post in the hope that you can help me correct a few of my current impressions.
Bear in mind that my impressions are based on relatively quick research.
If you are trying to decide whether to build a Julia prototype for your use case, just like I am, you shouldn’t read too much into my opinions.
Similarly, I hope no one feels insulted because I got the wrong impression.
Great
Only brief detail here, since folks on this forum will probably agree:
Concise math syntax
JuMP (plus other optimization packages)
ODE solver
Plotting: I like the Plots.jl meta-package, even though I gave up on getting PyPlot running for now. The GR, PGFPlots, and ASCII backends should do for me. PGFPlots and ASCII plotting are actually features that I now find myself looking for in other languages.
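One thing I appreciate is that switching backends is a one-liner. A minimal sketch, assuming the GR and UnicodePlots backends are installed alongside Plots.jl:

```julia
using Plots

gr()                           # select the GR backend
plot(sin, 0, 2pi, label = "sin(x)")

unicodeplots()                 # switch to ASCII plotting straight into the terminal
plot(sin, 0, 2pi, label = "sin(x)")
```

The same `plot` call works unchanged against either backend, which is the main draw of the meta-package for me.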
Unclear but promising
DataFrames, Pandas.jl: lots of discussion in this area without a clear outcome. The lack of object methods also makes the Pandas.jl wrapper look “unfamiliar”. Overall, this area feels like it is in flux.
Web: Escher looks like Shiny. It warns about beta quality, but it still looks promising. Genie seems to be a full MVC framework with an ORM. JuliaWebAPI.jl exposes functions via ZMQ or HTTP listeners. Either of the latter two should be sufficient to get a REST API up and running, I suppose.
Worrying
Testing: It appears that many popular libraries have failing tests judging from their Github repos. What is going on there?
Databasing: PostgreSQL.jl is currently unmaintained and I don’t see an obvious alternative. Basically, I am having serious doubts that I could quickly get my Postgres data into a Julia application. Are there databases with better support?
Deployment:
BuildExecutable looks like it would let me share my script with someone who doesn’t need to install Julia themselves. Would they still pay for JIT compilation on every start, though?
static-julia looks quite promising for building applications, and also libraries that integrate with C, C++, or Python. On the other hand, that project is also quite recent and very active. What are my odds of building a Python wheel based on it (a binary library that requires no Julia installation on the client machine)?
Again, I hope I am not being unfair with this potentially premature assessment.
I’d appreciate it if someone could help me shed some light on the areas that I misunderstood.
Deployment and databasing, in particular, feel a bit like roadblocks to me.
Testing: It appears that many popular libraries have failing tests judging from their Github repos. What is going on there?
Most popular packages seem to work fine with the stable release (0.6); if you discover otherwise, it’s a bug worth reporting. Most packages test against multiple versions of Julia, including nightly, the in-development version. On the road to 0.7/1.0 we are cleaning up (aka breaking) a bunch of stuff, and nightly changes so quickly that the majority of packages are presumably broken on it. Once we get closer to releasing 0.7/1.0, more packages will start supporting nightly, and once 1.0 is released, one should expect even nightly not to break stuff.
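For reference, the multi-version testing described above is usually just a Travis CI matrix; a minimal .travis.yml sketch:

```yaml
language: julia
os:
  - linux
julia:
  - 0.6       # current stable release
  - nightly   # in-development Julia; expect breakage before 0.7/1.0
```

A red badge on such a repo often means only the nightly entry failed, not that the package is broken on stable.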
Basically, I am having serious doubts that I could quickly get my Postgres data into a Julia application. Are there databases with better support?
ODBC is probably your best bet currently.
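To illustrate, a minimal sketch of pulling Postgres data through ODBC.jl. The DSN name, credentials, and table are hypothetical, and this assumes a Postgres ODBC driver is configured on the system; check the ODBC.jl README for the authoritative API:

```julia
using ODBC

# Connect through a data source name configured in the system's ODBC setup
dsn = ODBC.DSN("mypg", "username", "password")

# Queries come back as a DataFrame
df = ODBC.query(dsn, "SELECT id, value FROM measurements LIMIT 10")

ODBC.disconnect!(dsn)
```

The ODBC layer adds some system configuration overhead, but in exchange the same code works against most databases with an ODBC driver.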
DataFrames and related packages (including ODBC.jl) are currently going through changes. They seem to be getting close to a release, after which they should be easier to work with and should not change as much any more. Note that this is my impression as an outsider to that development.
Ok, that is sort of the impression that I also have about database access.
Would it be easier to download things over a REST connection that I put in front of postgres? https://github.com/JuliaWeb/Requests.jl looks promising.
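Something along these lines looks like it would work with Requests.jl; the endpoint is made up, and I’m assuming the REST service in front of Postgres returns JSON:

```julia
using Requests

# Fetch rows from a (hypothetical) REST endpoint sitting in front of Postgres
resp = Requests.get("http://localhost:8080/measurements"; query = Dict("limit" => "100"))
data = Requests.json(resp)   # parse the JSON body into Julia dicts/arrays
```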
To plug a package of mine, JDBC.jl has actually proven to be surprisingly stable. People have used it to connect to all sorts of esoteric databases. You’ll need java on your machine, but hopefully that’s not too much of a blocker.
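A rough sketch of connecting and iterating rows; the driver path, connection URL, and table are placeholders, and you should check the JDBC.jl README for the exact API:

```julia
using JDBC

JDBC.usedriver("/path/to/postgresql.jar")  # the Postgres JDBC driver jar
JDBC.init()

conn = DriverManager.getConnection("jdbc:postgresql://localhost/mydb?user=me&password=secret")
stmt = createStatement(conn)
rs = executeQuery(stmt, "SELECT id, value FROM measurements")

# Row-by-row iteration; column accessors are typed
for row in rs
    println(getInt(row, 1), "  ", getString(row, 2))
end

close(conn)
```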
On the badges: that is something I’ve been thinking about. The badges help package developers track what needs to be done, but they do indeed send the wrong message to new users, as evidenced by this thread. We can clarify, as we do here, but many users will get this impression and never return. So I’ve been leaning towards not testing on nightly on Travis for my packages.
I’ve used this on a few repos, but someday I’ll once again care about the results on nightly (truth be told, I care now; I just don’t always have time to get around to fixing the problems). From past experience I know I’ll forget to remove those allow_failures lines. I’ve decided that the appearance of failures is a small price to pay for the extra safety of not merging something I shouldn’t.
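For reference, the lines in question are the standard Travis matrix stanza that lets nightly fail without turning the badge red:

```yaml
matrix:
  allow_failures:
    - julia: nightly
```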
Your package is out of date. PackageEvaluator tests the last released version of your package, not master. Make sure you've tagged a version with your bug fixes included.
I don’t understand why you say that. Just because they are set to allow_failures doesn’t mean you don’t have to care about them; it only means that end users don’t see a failing badge. It’s still possible to click through and see the test results. So I’d say it’s better to use allow_failures and then check the Travis results manually. Maybe this is different for larger packages with more contributors, but for my own packages I always open the Travis tests in a separate tab after each commit to ensure the tests run smoothly. (I haven’t had time to play around with 0.7 yet, though.) What I’m trying to say is that the developers should probably be checking the Travis page anyway to monitor the tests, while the badges are most useful for end users viewing the page for the first time.
It’s not just nightly. You get “passing” only if it succeeds on all tested versions. Even Travis flakiness (“no output has been received for 10 minutes…”) leads to a failing badge.
I don’t understand why you say that…I always click the travis tests in a separate tab each time I commit to ensure the tests run smoothly
I try to do the same, but I don’t want to merge something broken because I forgot to check all the platforms. Really, not turning it off is just calculated laziness on my part. Even if it only takes two minutes per package, that’s… I don’t know… something like a couple of hours of mindless work? Doesn’t seem worth it.
The irony about testing and first impressions is not lost on me.
After a bit more evaluation, I think ODBC should work for me. It currently has an issue with SQLite, which I was testing with, but it will hopefully work with Postgres.
JDBC also looks promising but iterating rows is less comfortable than directly extracting data frames.