Tomorrow I have a presentation about my package, and questions about Julia will surely arise, and likely about Mojo as well.
I haven’t tried Mojo. For those who have tried it (recently): what do you think are the use cases where Julia is the preferable platform, and which ones where Mojo is?
I suggest you check the following topic. If you already have, then please ask more specific questions, to avoid repeating the opinions already raised there.
Sorry for a low-effort post, but here are some very obvious non-language concerns:
- Julia is more mature, having reached version 1.0 more than 5 years ago. Mojo is in the alpha testing phase.
- Julia has a much larger ecosystem. (OK, Mojo can in principle use CPython packages, but such packages will run at CPython speeds, not Mojo speeds.)
- Julia is completely open source.
Mojo is more like Rust than Python, so all of the advantages and disadvantages of a high-level language like Python, MATLAB, Julia or R vs. a language like C or C++ apply.
Whatever you do I suggest not being too defensive about Mojo. It’s a bit early to tell anyway, and it’s quite possible that Mojo will prove a brilliant alternative to Julia in various ways. But we’re not there yet.
Personally I think Mojo might be a great basis for large libraries like we have in Python (numpy, scikit-learn, etc.), while Julia will maintain its edge in terms of composability of unrelated packages. Also I like the unified design of Julia, vs the mix of two worlds that Mojo implements (e.g. with fn vs def).
Yes, I saw that Mojo doesn’t support multiple dispatch, and a feature request to that effect has been closed…
Can you explain more?
It’s a statically-typed language without a garbage collector, and it requires that you do manual memory management and satisfy the borrow checker. Let’s look beyond the marketing: does that sound like a high-level language like Python? Yes, it’s a whitespace-sensitive language, but trivial surface syntax does not make a language; the basic semantics and behavior do, and in that sense it’s nothing like Python.
Yes, you can omit this stuff, but then it will actually call a CPython interpreter and get CPython performance. That effectively means there are two backends: one is Python and the other is Mojo. The Mojo one acts like Rust, but the fallback plus the whitespace sensitivity is why it’s claimed to be close to Python. However, if all of the code you actually have to write does not act like Python, or, when it does, it does not use the Mojo compiler, then is it really all that close to Python, or is it a Rust-like language with a PyCall.jl-style feature baked in? I’d say it’s the latter, because I would focus on the language, its semantics, and its features rather than just whether it ends for loops with a lack of indentation.
On the plus side, Rust is growing in popularity, is more memory-safe when it comes to data races, and consistently ranks very well on benchmarks, even among other statically-typed languages. I am not much of a Rust fan myself due to its complexity and boilerplate, but I think what may end up happening is that Mojo lowers the barrier to entry further for people with no systems-programming background (most Python programmers), helping them dive into systems-programming concepts (e.g. memory management) to get some extra speed. If Mojo can avoid some of the complexity and boilerplate of Rust, it might be a good competitor in that space.
At the end of the day, Mojo will be a tool. People use tools to get jobs done. If the fastest and most flexible ML framework is written in Rust, more people will be using Rust, and the framework will be wrapped in other programming languages, including Python and Julia. If the Mojo team creates a language that makes it possible to: 1) write the fastest ML code (natively or based on their commercial MAX engine), 2) re-use some of the PyTorch/TensorFlow codebase and most of the tutorials, 3) easily call it from Python using Python syntax and a unified JIT compiler, and 4) easily deploy it on cloud, accelerators and edge devices, then more people and companies will likely flock to Mojo. But I guess we will see. We may be many years away from this, during which the Julia devs may even choose to introduce Julia’s own no-GC, Rust-like dialect if Mojo proves a success, or allow for more Zig-like manual memory management with custom allocators (like Bumper.jl) and deferred de-allocation.
requires that you do manual memory management
Not quite true. The Mojo compiler automatically inserts memory-release code at the right positions based on ownership rules.
Rust and C++ do this automatically, too.
C++ just doesn’t have the borrow checker (which makes it both a little easier to get code working in C++ than in Rust, and much easier to get code that compiles but doesn’t work).
Managing memory in languages without GC that do have some form of scope-based resource management (SBRM, aka RAII) isn’t that hard.
You should (almost?) never manually call malloc/free (or new/delete, or whatever) in a language like Rust or C++.
These aren’t C.
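For concreteness, a minimal C++ sketch of the SBRM/RAII point (names made up, purely illustrative): the compiler inserts the destructor calls at scope exit, so you never write free/delete yourself.

```cpp
#include <memory>
#include <vector>

struct Connection {
    ~Connection() { /* release the socket, file handle, etc. */ }
};

void work() {
    std::vector<double> buffer(1'000'000);       // heap allocation owned by the vector
    auto conn = std::make_unique<Connection>();  // heap allocation owned by the unique_ptr

    buffer[0] = 1.0;
    // ... use buffer and conn ...

}   // both destructors run here automatically; no free/delete written by hand
```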
If you care about performance, a GC language requires much more manual memory management to work around the GC.
If performance isn’t a priority, a GC doesn’t take any work from the developer.
Enough people complain about satisfying the borrow checker in Rust that it obviously isn’t trivial, and if someone has a hard time satisfying it in Rust, that might be a reason to be suspicious of code they wrote in C++: most C++ code should still follow all the rules Rust’s borrow checker enforces; you just don’t get a compiler that checks them.
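As a concrete (hypothetical) example of “compiling but not working”: the following C++ compiles cleanly, yet it breaks exactly the aliasing rule that Rust’s borrow checker would reject.

```cpp
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3};
    int& first = v[0];      // hold a reference into the vector
    v.push_back(4);         // may reallocate and invalidate 'first'
    std::cout << first;     // undefined behaviour, but it compiles without complaint
}
```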
Mojo does have a minor innovation here, which is that instead of normal SBRM, where destructor calls are inserted at scope exit, they can insert them earlier, immediately after last use.
I’m not sure how this is implemented, but hopefully it doesn’t preclude uses with other kinds of resources. E.g., in C++, to acquire a lock you construct a lock guard that releases the lock on scope exit. You don’t want that to be moved earlier!
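For reference, the C++ pattern being described; the guard’s destructor has to run at scope exit, not at the guard’s last syntactic use.

```cpp
#include <mutex>

std::mutex m;
int shared_counter = 0;

void bump() {
    std::lock_guard<std::mutex> guard(m);  // lock acquired on construction
    ++shared_counter;                      // critical section
    // 'guard' is never named again, but its destructor must still run here,
    // at scope exit; releasing the lock any earlier would be wrong.
}
```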
The advantages of being able to move them up are:
- Potentially greater memory re-use and lower total memory use, thanks to earlier freeing, especially in big scope blocks. An obvious example where this could be big is if you chain a bunch of linear algebra operations (see the sketch after this list).
- Inserting destructor calls at the end of a function call precludes tail call optimizations.
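A rough C++ sketch of the memory-reuse bullet (names and sizes made up): today you introduce an extra scope by hand to free a large intermediate early, which is roughly the effect that inserting the destructor right after last use would give automatically.

```cpp
#include <cstddef>
#include <vector>

std::vector<double> pipeline(const std::vector<double>& x) {
    std::vector<double> result(x.size());
    {
        std::vector<double> tmp(x.size());  // large intermediate
        for (std::size_t i = 0; i < x.size(); ++i) tmp[i] = 2.0 * x[i];
        for (std::size_t i = 0; i < x.size(); ++i) result[i] = tmp[i] + 1.0;
    }   // 'tmp' is freed here, right after its last use, lowering peak memory
    // ... further work on 'result' proceeds without 'tmp' still allocated ...
    return result;
}
```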
IMO, Mojo’s biggest innovation as a language is that you don’t need awkward FFI to call a static language from a dynamic one.
Languages like C++ and Rust are great in many ways for lots of tasks, but writing code you want to call interactively is a weakness: you need lots of boilerplate wrappers. Mojo solves that. This could be huge, IMO.
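To make the “boilerplate wrappers” point concrete, this is the kind of C-ABI shim you typically write today so that a dynamic language (Python via ctypes, Julia via ccall, …) can reach C++ code. The function names here are hypothetical.

```cpp
#include <numeric>
#include <vector>

// The C++ code you actually care about.
static double mean(const std::vector<double>& xs) {
    return xs.empty() ? 0.0
                      : std::accumulate(xs.begin(), xs.end(), 0.0) / xs.size();
}

// The wrapper layer: a C-compatible signature, compiled into a shared
// library, just so a dynamic language can call it without name mangling.
extern "C" double mean_c(const double* data, long n) {
    return mean(std::vector<double>(data, data + n));
}
```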
A non-innovation that is important is that, apparently, you can call anything pythonic and people will just believe you and think it is easy. Marketing matters.
Instead of normal SBRM, where destructor calls are inserted at scope exit, they can insert them earlier, immediately after last use.
That means Mojo is very different from C++ and Rust.
a minor innovation here
It’s also easy to call anything “minor”.
Freeing earlier is “very different”?
Seems like just a matter of removing a semantic guarantee, and then reordering.
I think C++ has too many guarantees around allocations and deallocations being considered observable and thus untouchable by the optimizer, which is the primary thing stopping Clang from being able to do the same there in many cases.
EDIT: Actually:
The “unspecified when and how” wording makes it possible to combine or optimize away heap allocations made by the standard library containers, even though such optimizations are disallowed for direct calls to ::operator new. For example, this is implemented by libc++ ([1] and [2]).
https://en.cppreference.com/w/cpp/memory/allocator/allocate
It sounds like clang or gcc should be allowed to do such things, making it odd that they don’t. Standard containers, and other containers implemented using std::allocator rather than operator new, should be able to be optimized in this way, but AFAIK they aren’t.
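If anyone wants to check, a tiny snippet like this (dropped into Compiler Explorer at -O2) shows whether the container’s allocation survives in the generated assembly. It is just a probe, not a claim about what any particular compiler does.

```cpp
#include <vector>

// If the heap allocation made through std::allocator were optimized away,
// this could compile down to just 'return 42', with no call to operator new
// or operator delete visible in the assembly.
int probe() {
    std::vector<int> v(1000, 7);
    return 42;
}
```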
Seems like just a matter of removing a semantic guarantee, and then reordering.
What about references?
References are just safer and more convenient pointers. (Compiler Explorer)
I’m not asking for the definition, but how “free early” could be such a trivial thing to implement when you have references with their own lifetimes.
Track uses of the pointer? LLVM has tons of infrastructure for this. Look at ScalarEvolution, for example.
LLVM can track escape (the nocapture parameter attribute); obviously if a pointer escapes, you probably shouldn’t move the free earlier.
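A small C++ illustration of the concern (hypothetical example): once another reference or an escaped pointer aliases the storage, “immediately after last use” has to account for every alias, which is exactly the kind of use/escape tracking being described.

```cpp
#include <vector>

static double use(const std::vector<double>& v) {
    return v.front() + v.back();
}

double run() {
    std::vector<double> data(1024, 1.0);
    const std::vector<double>& alias = data;  // a second name for the same storage
    // The last use of the storage happens through 'alias', not 'data', so any
    // early free of 'data' must prove that no alias is used afterwards.
    return use(alias);
}
```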
Julia doesn’t have a borrow checker, but that doesn’t mean “compiling-but-not-working”. The code will always compile to correct code, at least assuming single-threaded execution; the main problem I see is multi-threaded code, and then possible race conditions. You can even have multi-threaded code with no race conditions, depending on what you are doing.
You say SBRM “isn’t that hard”, but with GC it is even easier, as far as just releasing memory goes. For speed (early freeing) it is harder without manual intervention like Bumper.jl, but it seems to me Julia could do something similar to Mojo automatically (i.e. without code changes), and tell you in the REPL when it deallocates as early as Mojo does (or rather tell you when it can’t), which in practice would be earlier than C++. In C++ destructors must run in deterministic order (though that seemingly doesn’t rule out freeing earlier, as early as Rust; it’s just a matter of good optimizers…).
Just to be clear, I think you mean it’s easier to call from a dynamic language, e.g. Python or Julia, to (static) Mojo, not from Mojo to any other static language.
I don’t see why it’s as difficult to call from Julia to e.g. C, or better Zig, or I guess Mojo. I don’t think your claim is that it’s easier to call from Mojo than from, e.g., Julia or C++. It can be hard to call into C++ (e.g. because of name mangling, and maybe other issues), but that’s because of C++ rather than Julia (inherently). I believe it’s easy to call Rust, with no name-mangling issue at least, and you get the full benefit of Rust libraries, though the benefits may not extend to the whole system (as they would if all the code were written in Rust or with a borrow checker).
It’s not too hard to call Julia either, even though it’s a dynamic, GC-based language; it’s just not wanted by many because of the GC. That is, as I recall the Zig claim, you want to call into, and reuse code from, a non-GC language. With StaticCompiler.jl you get that for a limited subset of Julia. It’s a hassle and limited, but could it get better?
I mean it’s easier to call from dynamic Mojo to static Mojo than it is from dynamic Python to static C++.