Apologies, this reply has gotten much longer than I originally intended!
TL;DR: Once you’re familiar with “regular” Rust, take a look at async and how it works under the hood. At the very least, the different use cases it was designed for should further your understanding of general programming topics. So my take is that “knowing Rust” not only makes you a better Julia programmer, but a better programmer in general - not because Rust is so much better than Julia, but because it exposes you to different paradigms.
One Rust API that’s probably fairly unknown to Julians, and that I think follows the same trend as the IO API mentioned above (in the sense of at least being deliberately designed), is their async API (although I’m quite sure this is a controversial take).
Async functions in Rust don’t run at all until something actively polls them. In essence, an async function in Rust is lowered to an implicit state machine, where waiting on a task is equivalent to transitioning to another state in that state machine. This allows custom so-called executors to be written that can make different decisions about how tasks are run & scheduled (e.g. think of an async scheduler on an embedded system, which has very different requirements from one designed for web-request throughput!). It also means that this implicit state machine can (up to a point) be manipulated to provide additional guarantees, e.g. a specific (even reproducible) order of execution.
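To make the “nothing runs until polled” point concrete, here’s a minimal sketch (std only, no external crates): a hand-written `Future` that mimics one state transition of the state machine the compiler would generate for an `async fn`, plus a deliberately dumb busy-polling executor standing in for a real one. The names `TwoStep` and `block_on` are mine, just for illustration:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A hand-written future with two explicit states, mimicking the
// state machine the compiler generates for an `async fn`.
enum TwoStep {
    Start,
    Done,
}

impl Future for TwoStep {
    type Output = u32;
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        let this = self.get_mut();
        match this {
            TwoStep::Start => {
                *this = TwoStep::Done; // transition to the next state
                Poll::Pending          // "not ready yet, poll me again"
            }
            TwoStep::Done => Poll::Ready(42),
        }
    }
}

// A deliberately dumb executor: poll in a loop with a no-op waker.
// A real executor would park the task and wait for the waker instead.
fn block_on<F: Future>(fut: F) -> F::Output {
    fn clone(_: *const ()) -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = Box::pin(fut);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    // Nothing has run yet at this point - the future is inert data
    // until `block_on` starts polling it.
    let value = block_on(TwoStep::Start);
    println!("{value}"); // prints 42
}
```

Because the executor is just ordinary user code driving `poll`, this is exactly where an embedded or deterministic scheduler gets to make its own decisions about what runs when.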
This design leads to the very curious result that Rust async can compete with an actual RTOS in terms of performance on embedded hardware! For more information on how & why this is, see this blogpost:
The current Julia runtime/`@async` machinery can’t even begin to compete here, for a number of reasons that I won’t get into so that this reply doesn’t grow even more than it already has.
Julia runs `@async` blocks eagerly, and everything is hidden behind the actual Julia runtime, which means that the implicit state machine can’t be manipulated and scheduling decisions depending on the blocking state of individual `Task` objects can’t be influenced. I tried writing such a Rust-style executor for Julia once, but ran into the problem that not everything that blocks a `Task` yields back into the scheduler (`write` on a non-`stdout`/`stderr` file doesn’t, for example). Not to mention that the Julia scheduler wouldn’t be aware of this either, which means that when it resumes a previously blocked task, the task wouldn’t then call back into my own scheduler for the actual scheduling decisions.
Further, composing `async` functions in Rust feels a bit easier than doing the same in Julia with `@async`/`@spawn`, where fetching the result of a `Task` is necessarily type unstable at the moment, which also means that composing tasks can often be much slower than necessary due to the forced dynamic dispatch.
Of course, that’s not to say that the async API in Rust is perfect - far from it. It’s certainly a difficult topic, and I’d wager most Rustaceans would agree that the current state of async is not at all satisfactory. Still, I’d like to think that even with all its warts, it’s a bit closer to “good API” than what we have at the moment. If you’re interested in what the current state of async in Rust is, this is a good in-depth summary:
Although you might want to hold off on diving into this until you’re more familiar with Rust - quite a large part of the issues with `async` in Rust stem from its interactions with the borrow checker and lifetimes, so a fairly good grasp of those is pretty much required to understand all the nuances.