Chapel vs Julia (and compared to Python/Numba and C+OpenMP)

First, are Julia’s threads still “experimental”? The paper claiming it, published in September, seems to have first been available in June, so that may explain it. More importantly, are Julia’s threads “not yet mature enough” (and Chapel’s better)?

I thought Chapel’s syntax and semantics were stable, but you may find their change from 1-based to 0-based indexing intriguing (or their reasoning for it); otherwise they’re arbitrary-based (like Fortran and Pascal, you specify the start and end). That change is a special case, and the language is probably mostly stable.
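
For comparison, on the Julia side the default is 1-based, but arbitrary lower bounds are available through the OffsetArrays.jl package; a minimal sketch, just to illustrate, nothing Chapel-specific:

    using OffsetArrays  # assumes the OffsetArrays.jl package is installed

    # A five-element vector indexed 0:4 instead of the default 1:5;
    # you specify the start and end of the axis yourself.
    v = OffsetArray([10, 20, 30, 40, 50], 0:4)
    v[0]   # first element
    v[4]   # last element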

https://chapel-lang.org/whatsnew.html

November 16, 2020

[For some reason (and not yet reflected on its Wikipedia page), is Cray no longer supporting it?]

  • Removed the “Cray” mark from the version of the Chapel logo used on the website.
    […]

October 8, 2020

  • Added a mention of Chapel being named a Bossie 2020 Award Winner to the front page [I believe Julia also got a Bossie award previously.]
    […]

June 25, 2020

  • Added a new journal paper comparing Chapel, Julia, and Python/Numba to OpenMP by Gmys et al. to the publications and papers page.

https://www.sciencedirect.com/science/article/pii/S2210650220303734

In terms of parallel performance, the multi-threaded loop-level parallelism provided by Python/Numba and Julia allows us to speed up computations, but their multi-threading support (experimental) is not yet mature enough to compete with OpenMP, especially for very fine-grained tasks. Chapel’s task-based parallelism, on the other hand, scales nearly as well as optimized C/OpenMP.

https://chapel-lang.org/papers.html

Abstract for the one above:

This paper compares Chapel with Julia, Python/Numba, and C+OpenMP in terms of performance, scalability and productivity. Two parallel metaheuristics are implemented for solving the 3D Quadratic Assignment Problem (Q3AP), using thread-based parallelism on a multi-core shared-memory computer. The paper also evaluates and compares the performance of the languages for a parallel fitness evaluation loop, using four different test functions with different computational characteristics. The authors provide feedback on the implementation and parallelization process in each language.

You would think that, with version number 1.23 (the 26th release), it would be stable: https://github.com/chapel-lang/chapel/blob/release/1.23/CHANGES.md

library:

  • added new ‘Heap’ and ‘OrderedSet’ modules

[…]

the mason package manager:

  • interactive modes for mason new and mason init
  • improved ergonomics for mason search, test, build, and publish
  • bash completion support
minor improvements to Python interoperability

Deprecated / Unstable / Removed Language Features

[lots, e.g.]

removed string vs. bytes comparisons
[…]
removed support for C++-style deinitializer names e.g. proc ~C()
removed support for vectorization hinting to the C backend

Most of the threading interface was declared stable in Julia 1.5, which came out in late August or early September (I forget which).

1 Like

Chapel’s threading model is not as “advanced” as Julia’s, in the sense that it doesn’t have the whole task-based scheduler, IIRC. But at the same time, that task-based scheduler has its own overhead, and that overhead seems to be much larger in Julia than in Chapel. This is being actively worked on.
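
To illustrate the kind of fine-grained task spawning where that per-task overhead bites, here’s a minimal Julia sketch (not a benchmark; the naive recursive Fibonacci is just a stand-in workload):

    using Base.Threads: @spawn

    # Every recursive call schedules one branch as a separate task, so the
    # scheduler's per-task cost dominates once the work per task is tiny.
    function fib(n)
        n < 2 && return n
        t = @spawn fib(n - 2)    # run one branch as its own task
        return fib(n - 1) + fetch(t)
    end

    fib(20)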

That said, Chapel’s strength in parallelism is its distributed model. I think it handles distributed computation much better than Julia’s Distributed. A lot of this seems to come from using distributed data iterators, an idea that @tkf is exploring with FLoops.jl (which I so want to be called FruitLoops.jl).
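
For reference, here’s a rough sketch of a parallel reduction written in the FLoops.jl style (the function and range are made up, and the default threaded executor is assumed):

    using FLoops  # assumes the FLoops.jl package is installed

    # FLoops splits the iteration space across tasks and combines the
    # per-task partial results declared with @reduce.
    function parallel_sum(f, xs)
        @floop for x in xs
            @reduce(s += f(x))
        end
        return s
    end

    parallel_sum(x -> x^2, 1:1_000_000)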

11 Likes

It depends a lot on what type of parallelism and features you want. Right now Julia’s support is pretty barebones, both in terms of interface (you have to recode e.g. parallel reductions yourself, although there are packages that help a bit) and performance (so anything relatively fine-grained is very much out of the question). If you just have a for i = 1:100; a[i] = do_something_expensive(i); end, it works pretty well (just as in any language, in fact, with the added caveat of the dreaded closure capture in Julia). Julia’s threading model has set pretty ambitious goals that it still has to deliver on, IMHO. Hopefully it will!
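
To make the easy case and the hand-rolled reduction concrete, here’s a small sketch (do_something_expensive is a placeholder workload, and the chunking scheme is just one of several reasonable choices):

    using Base.Threads: @threads, @spawn, nthreads

    do_something_expensive(i) = sum(sin, 1:10_000) * i   # placeholder workload

    # The easy case: independent iterations writing to separate slots.
    a = zeros(100)
    @threads for i in 1:100
        a[i] = do_something_expensive(i)
    end

    # A hand-rolled parallel reduction: split the range into chunks, sum each
    # chunk in its own task, then combine the partial results serially.
    function threaded_sum(f, r)
        chunks = Iterators.partition(r, cld(length(r), nthreads()))
        tasks = [@spawn sum(f, c) for c in chunks]
        return sum(fetch, tasks)
    end

    threaded_sum(do_something_expensive, 1:100)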

4 Likes