This month in Julia world - 2025-08

A monthly newsletter, mostly on julia internals, digestible for casual observers. A biased, incomplete, editorialized list of what a clique of us found interesting this month, with contributions from the community.

For recent news on Julia and interesting conversations with community members, check out the JuliaDispatch podcast (on many platforms, including YouTube and Spotify). Highlights from the newsletter get discussed (with more context) during some episodes.

JuliaCon 2026 will happen from August 10th to 15th in Germany.

“Internals” Fora and Core Repos (Slack/Zulip/Discourse/Github):

  • A fun little example of how to properly “cast” from one type to another in Julia. Calling Int(x) is not guaranteed to convert the instance x of some arbitrary type into an Int. Of course, only a sociopath would create a method Int(::MyEvilType) that does not return an Int, but nonetheless, the trick to protect yourself from such evil is to add a type assert like Int(x)::Int – now instead of being at risk of bad behavior and stack overflows due to poorly written user code, you get an early type assertion error. See this being recently suggested as a guardrail in some low-level Julia code in #59506. A small sketch of this guardrail appears after this list.
  • JuliaC, the ahead-of-time julia compiler, is now living in its own repo and will be available as a Julia app.
  • --trace-compile is a very useful flag to learn why your package loads slowly and what precompile statements to add. There is an interesting interplay between type inference and the compiler, where the compiler might need to run a function (with actual runtime dispatch) in order to learn something about it – an event that does not happen during actual use of the compiled code. This can cause unnecessary precompile statements to be logged. PR #59366 marks such precompile statements, but consider perusing the comments in that PR to learn more about these intricacies.
  • Since early this year, Sam has been contributing significant improvements to Julia ergonomics. Looking from the sidelines, it seems like they hit the ground running with their first ever contribution (my favorite): a drastic simplification of the REPL auto-completion implementation from a few months ago. Now they have a PR on significantly reducing memory pressure during heavy compilation loads (and another reduction here), and one on compressing the sysimage (the cache of compiled code), leading to a ~70% size reduction for untrimmed apps produced by PackageCompiler.
  • They are also contributing a much faster way to enable Memory/Thread/AddressSanitizer compile passes in Julia. These tools can be used to detect and debug common safety, correctness, and performance issues.
  • Who knew that segfaults can be “faked”, triggered by a signal rather than by an actual segmentation fault due to accessing memory you do not own? PR #59275 will warn about such segfaults, hopefully simplifying the debugging of messy nesting of runtimes from different languages.
  • Interesting discussion on the word “functor” and its minor historic use in the julia docs.
  • A few months ago Julia gained the capability to redefine structs (not just methods). Due to how this interacts with world age (the mechanism for tracking the most recent definition of methods and structs), subtle changes to the @testset and @allocated macros were needed. The short version is that @allocated has always been a bit of a heuristic test, and there are better alternatives to it if you are going to test for allocations in your package’s tests. See comments in #58780 for details. Also #58057 and #59278.
  • A common performance pitfall in julia is iterating over a diversely typed array where the innermost part of the loop cannot be type stable, leading to runtime dispatch (basically a dictionary lookup for the appropriate method to call). Frequently discussed solutions are Algebraic Data Types (ADT), as implemented in Moshi.jl, LightSumTypes.jl, or SumTypes.jl. The Julia compiler also does the extra work to “union split” if the diversely typed array does not contain instances of “too many” different types. Here is a neat manual example of how to solve this type of problem without relying on general frameworks like ADTs: Lilith sped up the hashing of julia abstract syntax trees by implementing a manual union split (tree structures are another place that frequently has this type of issue). A minimal sketch of the manual union-split pattern appears after this list.
  • An in-depth high-quality discussion by Jakob on when and why you should use ScopedValues (a better, nestable alternative to global variables) vs task-local storage (dedicated semi-temporary storage within a task, simplifying multithreading algorithms). A small sketch contrasting the two appears after this list.
  • ScopedThunk now permits you to take a snapshot of the state of scoped values (a more structured way to deal with global variables defining a context) and use that snapshot at a later time.
  • LazyScopedValue can be used to create a scoped value which lazily computes the actual value the first time it is requested, thus supporting reading the initial state of a scoped value from the environment.
  • TestSets now use ScopedValues for storing results, showcasing the simplicity ScopedValues provide when dealing with multiple tasks reading and writing in global storage.
  • Julia uses the general-purpose, platform-independent, high-performance rapidhash algorithm in much of its hashing utilities. There are recent improvements to internals related to it in PRs #59177 and #59185.
  • Much of the precompile logic in the julia compiler was moved from C code to a pure-julia implementation.
  • Now you can send a signal to the Julia runtime to trigger --trace-compile at an arbitrary time without having requested it at the start of the process. Just send SIGUSR1/SIGINFO (the latter is bound to ctrl+T on many systems).
  • [Iterators.findeach](https://github.com/JuliaLang/julia/pull/54124) is a lazy version of findall that avoids allocating an array of indices. A hypothetical usage sketch appears after this list.
  • Another one of Lilith’s surprising drive-by performance optimizations – an order of magnitude faster prod(::AbstractArray{BigInt}) by more intelligently pre-allocating buffers for BigInt instances and traversing the array in a more efficient order. In #59456.
  • Neven has been working for quite a while on simplifying unnecessarily complicated method dispatch in Base (which also lowers the potential for method invalidation). See “eliminate some nongeneric methods of length and size”, “generic size: avoid method static parameters and abstract type assert”, “avoid method proliferation for Tuple functions”, and many more.
  • Ongoing work on making @threads usable with array comprehensions.
  • A fun question on weird type queries with UnionAll.
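
To make the Int(x)::Int guardrail from the first bullet concrete, here is a minimal sketch; the BadDuration type and the two helper functions are made up for illustration and are not from the linked PR:

```julia
# A poorly behaved constructor: Int(::BadDuration) does not actually return an Int.
struct BadDuration
    ns::Float64
end
Base.Int(d::BadDuration) = d.ns / 1_000_000_000    # oops: returns a Float64

naive_seconds(d)   = Int(d)        # silently propagates the wrong type downstream
guarded_seconds(d) = Int(d)::Int   # the typeassert fails early and loudly instead

naive_seconds(BadDuration(3e9))    # 3.0 – a Float64 sneaks through unnoticed
guarded_seconds(BadDuration(3e9))  # throws TypeError: in typeassert, expected Int64, got Float64
```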
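
And here is a minimal sketch of the manual union-split pattern from the runtime-dispatch bullet; the shape types and the area function are invented for illustration and have nothing to do with the AST-hashing PR itself:

```julia
struct Circle; r::Float64; end
struct Square; s::Float64; end

area(c::Circle)  = π * c.r^2
area(sq::Square) = sq.s^2

# The array has element type Any, so a naive loop dispatches at runtime on every element.
shapes = Any[Circle(1.0), Square(2.0), Circle(3.0)]

function total_area(shapes)
    total = 0.0
    for sh in shapes
        # Manual union split: branch on the handful of expected concrete types,
        # so that inside each branch the call to area is statically dispatched.
        if sh isa Circle
            total += area(sh)
        elseif sh isa Square
            total += area(sh)
        else
            total += area(sh)  # fallback: plain runtime dispatch for anything else
        end
    end
    return total
end

total_area(shapes)  # ≈ 35.4 (4 + 10π)
```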
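
A small sketch contrasting ScopedValues with task-local storage, as discussed in Jakob's post; the LOG_LEVEL value and log_debug function are hypothetical, and the ScopedValues API shown requires Julia 1.11+:

```julia
using Base.ScopedValues   # provides ScopedValue and with (Julia 1.11+)

# A ScopedValue is set for a dynamic scope and is inherited by tasks spawned inside it.
const LOG_LEVEL = ScopedValue(:info)

log_debug(msg) = LOG_LEVEL[] === :debug && println(msg)

with(LOG_LEVEL => :debug) do
    # The spawned task inherits the scoped value; no explicit plumbing is needed.
    fetch(Threads.@spawn log_debug("verbose output from a worker task"))
end
log_debug("this is not printed")   # back to the default :info outside the scope

# Task-local storage, by contrast, is private scratch space owned by a single task:
task_local_storage(:scratch, Float64[])    # set for the current task only
push!(task_local_storage(:scratch), 1.0)   # child tasks will NOT see this buffer
```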
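
Finally, a guess at what the lazy Iterators.findeach from the linked PR might look like in use. This is not code from the PR; it assumes the signature mirrors findall(predicate, collection) and that the feature is available in your Julia build, so treat it as a sketch:

```julia
xs = [10, 3, 7, 42, 5]

findall(>(6), xs)                    # eagerly allocates the index vector [1, 3, 4]

# The lazy counterpart yields the same indices one at a time, with no intermediate array:
for i in Iterators.findeach(>(6), xs)
    println("xs[$i] = ", xs[i])
end

first(Iterators.findeach(>(6), xs))  # 1, without scanning the rest of xs
```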

In search of contributors and new maintainers (specify novice/moderate/expert and internals/domain background necessary):

Ecosystem Fora, Maintenance, and Colab Promises (Slack/Zulip/Discourse/Github):

Numerical Math ecosystem:

Mathematical Optimization ecosystem:

Autodiff ecosystem:

Notes from other ecosystems:

  • A discussion about monomorphisation (the process of creating a brand-new specialized compiled version of a function behind the scenes for each concrete type the runtime might encounter, i.e. one of the main features that makes julia fast), its relationship to boxed types, and the great compilation-time cost it would incur if it were added to OCaml.
  • A great primer on how GPUs work and how to think about them from the JAX community.

See also: French community newsletter, community calendar, Turing.jl newsletter

Please feel free to post below with your own interesting finds, or in-depth explanations, or questions about these developments.

If you would like to help with the draft for next month, please drop your short, well-formatted, linked notes in this shared document. Some of it might survive by the time of posting.
