I find it hard to develop in Julia

Julia is awesome to use, absolutely. But as an application developer, I find Julia frustrating. I wanted to write a post to outline my view on this. I write in a subjective tone because I want to share how I feel, and I think that might be useful to the community. My hope is to make my voice heard by Julia’s decision-makers, and to hear from other developers.

In short, I find Julia an unstable and difficult platform to develop for. Precompilation is slow, developer tooling is lacking, and with every Julia release, I fear that my packages will stop working (they often do). The maintenance burden from compatibility work is taking a toll on me – to the point where I am demotivated from starting new Julia projects.

New releases are scary

Julia makes too many breaking changes. Some minor releases have cost me hundreds of painful hours of compatibility work. With every Julia release, I fear that my project might be over, and I wish that the release gets delayed as long as possible. That’s not how I want to feel about my favourite language.

Julia regularly makes big internal changes in minor releases that break (important) packages. There is testing infrastructure to prevent this (PkgEval), but my experience is that PkgEval is not always called when needed. This means that package breakages are missed. And it seems that once your package (or a dependency) breaks, all future PkgEval runs will skip your package.

It’s up to me, the package developer, to test my package with the Julia prereleases (alpha, beta). This feels unfair: I put my trust in an ecosystem – spending my time developing a (popular) package – but now I have to spend time debugging, reporting and working around Julia compatibility issues, with no migration guide. And the end result? If you’re unlucky (like me), a slightly worse experience for my users with higher startup times and a higher memory footprint.

Other languages

I regularly work in both Julia and (trigger warning) JavaScript. Working in JavaScript is more peaceful because of its amazing developer tooling, but stability stands out the most. I can write a large, complex codebase, and I know that it will keep working 10 years from now with little or no maintenance. Packages might change, but the language runtimes (browsers) are extremely stable – my code will keep doing the same thing. The web ecosystem makes exciting developments, but not at the cost of my current work. For this reason, we now usually write new PlutoJL features in JavaScript instead of Julia, to keep our project sustainable. When Julia code breaks, we replace it with JavaScript if possible.

I need more hope

When I struggle with the developer experience of Julia, I am sometimes left feeling like this was somehow my fault. I used the API wrong (but there was no clear documentation). I used internal API (because there was no public API). Precompilation has been improved (but my package starts slower). The change is “not a bug” (but it still broke my package). In these interactions, I feel left behind.

My packages break in the name of progress, but progress towards what? Julia shouldn’t be a platform for compiler experiments. I would be happy to accommodate internal changes if they improved the developer experience. We still need reliable and easy-to-use [Ctrl+C interrupt, static type checking, breakpoint debugging, profiling, friendly error messages, WebAssembly]. Because I think that Julia is great as it is. Let’s just make it easier and get people to use it! In my eyes, Julia’s main issue is lack of adoption – not lack of a better memory layout, lack of compiled use, etc. Stability and developer experience are key here.

That’s it! Does this resonate with you, or is your experience different? I’m curious to hear!

96 Likes

I see where you’re coming from, but to be fair I think you’re in a very unique position to have Julia releases be so breaking.
As far as I can tell, this is due to a deep integration with semi-private APIs in Julia, concerning Pkg, expressions and eval.
Makie, with around 150 dependencies and a huge amount of Julia code, usually needs just a one-line change on new releases, which is typically fixed by outside contributors before the new Julia version is even released – the 150 dependencies also seem to be magically fine on every Julia release. This isn’t because I’m a genius or anything; it’s mainly because Makie and its dependencies don’t need to call into less stable Julia APIs.
The way Pluto creates a completely new way to do package management seems to be in a fight with the standard Pkg API almost by design.
Same goes for the expression parsing for figuring out the variable assignments, I suppose? I’ve created similar packages which were “hacking” Julia by calling into private APIs or relying on behavior that wasn’t guaranteed, and I had to abandon them after releases with too many “breaking” changes – changes that weren’t considered breaking by the developers, which I think was true by how the stable API of Julia is defined. Still super annoying, of course.
I really see only one way forward to improve the situation, which is to make what you need into a public stable Julia API - which, admittedly, may be even more work than fixing the breaking changes on release. I guess a thorough review of what can be moved to a more stable API could also help, but I guess that has been tried or is at odds with the features you want to offer.

To put things into perspective: I think I have never spent more than 10 minutes adapting Makie to a new Julia version in the last 6 years or so, as far as I can remember. On other tiny packages, which were heavily doing things “on the edge”, I might have spent days on an upgrade.

53 Likes

Totally agree. Human and financial resources around Julia seem to always target an advanced audience (e.g., PhDs doing SciML stuff); and IMHO this is not the audience that the language should be focusing on at this point in time.

29 Likes

There are some things here I really agree with, but also some that land a bit off for me.

When you do use internal APIs, it becomes your responsibility to deal with that when those internal APIs inevitably change. I get that this is frustrating though, and I wish the Julia devs were better about at least communicating when they’re changing internal APIs, so that people who have no choice but to use them can at least adapt.

IMO the most reasonable thing you can do is identify the internal APIs you’re using, and then enter into conversations with the Julia devs about why and how the public APIs just aren’t enough for you, and try to figure out together whether a public, stable API can be made that fits your use-case. This won’t solve your problems overnight, but hopefully it can eventually get Pluto onto a more stable foundation of public APIs.


For better or worse, Julia language development happens primarily in response to the acute needs of developers. If someone is working on something and encounters a problem, they work on a change to the language to fix it.

Pluto is a project that had a lot of interest from developers early on, but various design choices and communications from you (which of course you are completely within your rights to make) have made it clear that you don’t see Pluto as a tool for developers, but as an educational tool – and if it doesn’t fit the use-cases of developers who are interested in it, then that’s just too bad, go use something else.

While this is a totally reasonable decision to make, it does have the side-effect that it makes it so that the people who tend to develop the language are not paying much attention to Pluto and don’t feel they have a personal stake in helping make public-facing APIs for Pluto to use or to add new language features that Pluto is developing, because it doesn’t actually help the cases they’re working on.


Regarding this:

I’m a little bewildered. Why shouldn’t Julia be a platform for compiler experiments? It has essentially always been a compiler experiment, and has always been attractive to people interested in using compiler experiments to solve hard problems. While I get that maybe the recent changes to the language haven’t been particularly helpful to you, there has been a lot of work that has been tremendously helpful to other domains or applications.

Likewise, other people might not care at all about the stuff you list as very important things missing from the developer experience (though most would agree they’re certainly nice to have!).

24 Likes

About the interaction with Pkg specifically, we also had a non-fun experience in BinaryBuilder.jl, where we’ve been stuck for years on Julia v1.7 (yes, the version released in 2021), because BinaryBuilder.jl taps into Pkg internals, which are intentionally non-stable. I’m impressed Pluto.jl somehow managed to keep up with newer Julia versions :smiley:

However, we eventually had a much better experience when, instead of fighting every single version of Pkg, we had a discussion with the Pkg developers about adding a set of tests in Pkg representing BinaryBuilder.jl’s Pkg workload, after getting help from them to clean up the use of the Pkg API inside BinaryBuilder.jl. The result is that BinaryBuilder.jl should now work with Julia v1.12, v1.13 and v1.14, in addition to v1.7 – four minor versions supported at the same time is a record! And the integration tests inside Pkg should help with better future support, with fewer fights along the way.

53 Likes

Lack of AOT compilation directly leads to lack of adoption from people that need it for their use case. I’m glad this was eventually addressed.

9 Likes

Julia updates can feel very positive (e.g., 1.10 was a great release all around), but they can indeed feel frustrating…

Breaking Changes
Actually, most Julia minor releases are breaking in practice! This is not my subjective feeling; I checked the situation a few years ago. Basically, load the General registry state as it was at the Julia 1.x release and try instantiating a nontrivial environment on Julia 1.(x+1). Sometimes it doesn’t even instantiate; sometimes it instantiates but doesn’t work.
Typically, breakage is indeed localized and not too hard to fix – but it’s unambiguously breaking.
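For anyone who wants to try this themselves, the check can be sketched roughly like this (the environment path is hypothetical, and pinning the registry to its old state is left out):

```julia
# Hedged sketch of the check described above: take an environment whose
# Manifest.toml was resolved on Julia 1.x, then attempt to use it on
# Julia 1.(x+1). The "path/to/old_env" directory is made up for
# illustration; it would contain a Project.toml and Manifest.toml.
import Pkg

Pkg.activate("path/to/old_env")
Pkg.instantiate()    # step 1: may already fail to resolve/install
Pkg.precompile()     # step 2: or fail to precompile
# step 3: breakage may only show up once you load and exercise
# the packages, e.g. `using SomePackage`
```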

Performance Regressions
These come in two kinds. First, precompilation time has been growing steadily over the past years with no sign of reversing. For UX reasons, I personally continue using 1.10 – and worry about the time when it becomes unsupported and one will pay 2-3x for TTFX.
The other kind of performance regression is inference failing to infer a type it used to, or type-unstable code becoming slower, etc. – this also happens quite often.


All of these issues were discussed already, and they are generally recognized – the question is mostly in the priorities.

9 Likes

At work, I definitely struggle with Julia upgrades due to undocumented breaking changes.

The most recent example: in Julia 1.12, apparently the way command macros worked changed, which broke SQLStrings.jl, which LibPQ.jl (the most popular Julia postgres library) relies on. That’s not a huge deal in itself, but I didn’t see this mentioned anywhere in the release notes or highlights. If there’s a place that tracks breaking changes, that would be great to know before upgrading.
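For context on what a command macro is: writing name`...` is sugar for a call to a macro named @name_cmd, which receives the raw string literal. Packages like SQLStrings.jl build on this; the lowering details are what apparently shifted in 1.12 (I don’t know the specifics either). A minimal sketch:

```julia
# Writing  sql`SELECT 1`  is sugar for calling the macro  @sql_cmd
# with the raw string literal "SELECT 1". The macro below is a made-up
# toy, not SQLStrings.jl's actual implementation.
macro sql_cmd(query)
    # just tag and return the raw string
    :(("sql-literal", $query))
end

sql`SELECT 1`   # → ("sql-literal", "SELECT 1")
```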

8 Likes

As Julia matures, maybe the community as well as core devs can eventually settle on the LTS as the main recommended version.

5 Likes

I consider this a “good” thing, because Julia is a pretty odd language (does any other language do anything like world age?) and there were many deep, important things that hadn’t landed by v1.0 and needed development or clarification, sometimes just as odd. Native code in package images (v1.9+) was important, but it forced people to stop some manual-discouraged behaviors in packages. Involving constants (such as type definitions) in world age (v1.12) was important, but it forced people to stop eval-ing new global constants and accessing them dynamically inside the same method (insidiously, the failure was silent if you accessed a new function name, often in order to return it). And who can forget the Return of the Soft Scope in v1.5? For now, upper-bounding versions in a project and saving the manifest is the only guarantee of reproducibility.
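For readers who haven’t run into world age: a minimal illustration of the eval pattern mentioned above (function names and values are made up):

```julia
# World age in one example: a method defined by eval during a call is
# "too new" to be visible from the still-running (older) world.
function broken()
    @eval newfn() = 42
    return newfn()   # errors: MethodError ("method too new"), or a
                     # binding-related error on 1.12+ where constants
                     # have world age too
end

# The usual escape hatch: capture the function object returned by
# @eval and call it via invokelatest, which runs in the latest world.
function works()
    f = @eval newfn2() = 42
    return Base.invokelatest(f)
end

works()   # 42
```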

I think a lot of users have ideas on how a REALLY breaking v2 can handle all these changes better (I personally would prefer a visible, unique declaration in a home scope over dealing with the global/local and hard/soft rules on whether variables are reassigned or new), but that’s definitely not worth doing until Julia’s use cases settle more. JuliaC is still experimental, so it’s a pretty bad time.

Never wrote a command macro before, what change should I know about? (If it’s easier, could just add a link for me to read).

As a user and an application developer, my view is that the core devs are prioritizing the stability of the public API by means of internal API and implementation changes. This is fine, but the issue (again, in my view) is that there are still no formal, language-wide contracts (e.g., traits or interfaces) for public APIs. The lack of formal contracts blurs the distinction between public and private API, since public API is just an unenforced social contract. It also makes the detection of contract violations difficult. I would imagine the LSP needs to implement its own algorithms to detect issues, which is also tough.
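To make “unenforced social contract” concrete: Julia’s iteration interface exists only as documentation, and nothing at definition time checks that the methods below are mutually consistent.

```julia
# The iteration "interface" is a documented convention, not a checked
# contract: you opt in by defining methods with the right shapes.
struct Countdown
    n::Int
end

# contract: return (item, nextstate), or `nothing` when exhausted
Base.iterate(c::Countdown, state = c.n) =
    state <= 0 ? nothing : (state, state - 1)

# contract: must agree with how many items `iterate` yields --
# nothing enforces this; a lie only surfaces when e.g. collect runs
Base.length(c::Countdown) = c.n

collect(Countdown(3))   # Any[3, 2, 1]
```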

I would argue that we need more breaking changes at the surface level of the language to be more strict and formal. That will certainly cause short-term pains but eventually improve the experience for developers.

7 Likes

+1 for this. I don’t understand the need for such complexities.

1 Like

In my humble opinion, Pluto doesn’t need a custom package manager. I’ve been actively avoiding that package manager because it dumps literally hundreds of lines of package info (the contents of Project.toml and Manifest.toml) straight into the notebook file, making it absolutely massive. When I want to share Julia code online, I simply do this at the top of the code:

begin
    import Pkg; Pkg.activate(temp=true)
    Pkg.add([
        (; name="Distributions", version="0.25.123"),
        (; name="PyFormattedStrings", version="0.1.13"),
        # ...and so on...
    ])
    Pkg.status()
end

The Pkg.add call outputs tons of extremely verbose text, but I think there’s a way to redirect it to /dev/null or otherwise silence it. The Pkg.status() call is there to show what’s in Project.toml.
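On silencing the output: the Pkg API functions accept an io keyword, so I believe the verbose log can be dropped like this (not certain the keyword counts as documented, stable API):

```julia
# Hedged sketch: redirect Pkg's log output to devnull via the `io`
# keyword argument. I believe this works on recent Julia versions.
import Pkg

Pkg.activate(temp=true; io=devnull)
Pkg.add([(; name="PyFormattedStrings", version="0.1.13")]; io=devnull)
Pkg.status()   # still prints the resolved environment
```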

This just works for me: all package information is written out explicitly (package names, exact versions, etc), it uses the built-in Pkg.jl, it’s just regular Julia code. It works fine in Pluto.

Can’t Pluto create a new cell with this block of code automatically? I guess it’s not that simple because Pkg doesn’t support dry runs: Dry run of adding a package.

Just imagining some convenient API like Pkg.try_add that basically does everything Pkg.add does but does not modify the existing environment and instead returns the would-be contents of Project.toml as a vector of Pkg.PackageSpec:

function on_new_input_using_statement(packages::AbstractVector{<:AbstractString})
  package_specs::Vector{Pkg.PackageSpec} = Pkg.try_add(packages)
  for spec in package_specs
    # add the package to the Pkg block in the notebook if it's not already there
  end
end
4 Likes

I’m also sometimes a bit torn in how I feel about Julia’s general development. It seems to me that it’s largely driven by the commercial work that can be built on top of the SciML ecosystem or other numerical and optimization libraries. For these, the run time usually outweighs startup cost so heavily that raising the performance ceiling is given priority over making the system responsive and snappy. Hence, you get tools like JuliaC, AllocCheck which are mostly useful for these cases where your program can be type-inferred completely. This is almost never the case for the code I write.

I work on things like Makie, AlgebraOfGraphics, QuartoNotebookRunner, SummaryTables. These are utility packages by and large. Often they contain a lot of functionality, only a bit of which you’re going to need at any given time. I want them to be snappy, not maximally quick after compiling for a long time. But snappiness has to be fought for: hunting invalidations (which isn’t easy by itself and can be undone by dependencies or compiler changes), tricking the compiler into not optimizing to death code that really doesn’t need it, ripping out all the nice broadcasting syntax where it just takes much longer to compile than map or similar. All of this activity feels kind of futile because the rules can change under me at any time. The compiler heuristics are not really geared towards this kind of code. You can see it in basic things like how keyword arguments are implemented: you get NamedTuples under the hood, so “bag of keywords” interfaces like plotting tools often use are tough to compile, and recompile large chunks with different types, if you’re not careful.
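What “NamedTuples under the hood” looks like in practice (plotlike is a made-up stand-in for a plotting call):

```julia
# Each distinct combination of keyword names and value types yields a
# different concrete NamedTuple type, so "bag of keywords" APIs can
# specialize -- and compile -- once per combination.
plotlike(; kwargs...) = values(kwargs)   # the underlying NamedTuple

a = plotlike(color = :red, linewidth = 2)
b = plotlike(marker = :circle)
typeof(a) == typeof(b)   # false: two distinct concrete types
```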

For a longer time, my main hope has been the improvements that Tim Holy has said Julia’s lowering mechanism could bring: one or more orders of magnitude in the speed at which Julia can be interpreted. So I’m watching JuliaLowering’s development, even though I don’t understand how the interpreter story could change. Still, I’m certain that a really large amount of the code I write would benefit from being fully interpreted: it just doesn’t matter if I can write the second SummaryTable a couple of milliseconds faster, when I could instead write the first one instantly after load. Most people here are happy with Python for lots of their work, after all. I think interpreting a ton of utility code like that would make the Julia tooling experience snappier. And then we could have islands in our code where we say: sure, compile this down to the best of your ability. But it wouldn’t be everything, with huge precompile files, minutes of precompilation, still so many methods missed, still so much invalidated code to recompile. The packages that are basically static code could still do their thing. But I think that’s the core numerical software, not all the utilities that make coding in Julia actually fun.

32 Likes

One bit of hope is the new syntax versioning for Julia v1.14. It’s just one flavor of possible breakage, but it’s definitely a big step towards a more stable future.

Now, that said, I could still see Pluto having unique challenges because its users will want to themselves opt into whatever new syntaxes Julia v1.15 brings… and those changes may break the way that Pluto tracked assignments in 1.14.

11 Likes

We should not underestimate the importance of a smooth teaching experience for Julia. After all, adoption by companies etc. very much depends on the “supply” of well-educated Julia enthusiasts who come out of university.

In my experience, Pluto very much is able to deliver this experience – once the early days were over, providing lecture content in the form of Pluto notebooks worked for me without hiccups with regard to package versions, different operating systems etc., and with positive feedback from the students. Pluto’s package management is an important part of making this experience possible.

At the moment, indeed, I see precompilation time as the most critical point in this respect. The other thing we should keep in mind is that Pluto inspired marimo, its Python-based competitor, which can now be deployed with zero installation effort in the browser, based on wasm.

That said, in shared projects I use Pluto notebooks that work in the project environment and thus disable the Pluto package manager. But this use case is aimed at users already experienced in Julia.

25 Likes

But that makes your notebook unreproducible, because there are dependencies of dependencies of dependencies and you clearly do not specify all of them. There is a reason why Project.toml isn’t enough to make the notebook reproducible.

9 Likes

I strongly agree with this statement. From the perspective of developers maintaining larger codebases or ecosystems, Julia’s tooling can feel quite frustrating.

Error messages are often overwhelming, debugging support is limited, and static or semi-static type checking is not very accessible. While powerful tools like Cthulhu exist, they are difficult to use in practice and not something most developers can rely on day to day. In contrast, I’ve recently been writing more Python, and tools like Pydantic provide fast, clear, and actionable type feedback. Even though Python’s type system is not “real” in the same sense as Julia’s, it is extremely useful in practice. Julia may have a stronger underlying type system, but the way type inference works relies on compiler heuristics that are intentionally underspecified. As a user, that often makes the behavior hard to reason about and difficult to rely on during development.

Another concrete issue I’m currently facing is that tests across multiple packages are failing due to new allocations introduced in a minor Julia release. These tests relied on @allocated to assert allocation-free code paths. After the update, CI started failing. One can argue that @allocated is not the recommended approach – but then what is the recommended approach? More importantly, why does code that previously did not allocate now do so? (the failing CI). After fixing those tests, others started to fail in a different code path. This appears to be a subtle interaction between @test and @allocated, but there is no straightforward way to diagnose it. Without effective debugging or profiling tooling, the cause of such allocations becomes a black box.
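One workaround I’m aware of (not claiming it’s the recommended one): call the code path once before measuring, so @allocated doesn’t count compilation-related allocations:

```julia
# First calls include compilation allocations; measuring after a
# warm-up call usually isolates the steady-state behavior. Whether
# that steady state stays allocation-free across Julia versions is
# exactly the problem described above.
add!(y, x) = (y .= y .+ x; nothing)

x = fill(1.0, 100)
y = zeros(100)
add!(y, x)                    # warm-up: compile before measuring
n = @allocated add!(y, x)     # typically 0 once compiled
```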

By comparison, tooling in ecosystems like Python and JavaScript is often excellent. These languages may not offer the same performance or advanced compiler optimizations, but development in them often feels nice because the tools actively help you understand what’s going wrong. Julia can feel great when everything works as expected, but development is precisely the phase where things don’t work yet. That’s when good tooling matters most. At the moment, I often hesitate to add new features, knowing that if something goes wrong – whether it’s a bug or a performance regression – I may spend hours tracking it down due to limited tooling support.

The situation is compounded by ecosystem instability. The VS Code extension frequently breaks across versions, and running tests can be frustrating. At one point between Julia 1.11 and 1.12, the LanguageServer would crash when an empty line was added inside an @testitem, triggering full test re-precompilation. More recently, Julia 1.12 broke Revise, particularly for struct revisions. Given how essential Revise is to productive Julia development, this is a serious issue. At this point, Julia without Revise is barely usable for iterative development. That raises the question of whether such a critical tool should really live outside the core language, especially if it can remain partially broken for extended periods.

Overall, this does not feel pleasant as a developer. It’s frustrating, and I strongly relate to the original post’s concerns. Julia has enormous strengths, but I hope future effort prioritizes robust, reliable developer tooling at least as much as new compiler optimizations. Without that, the cost of development remains unnecessarily high.

26 Likes

For me a big issue is that many core parts of the language are currently not documented, or are documented only at a surface level – not in a way that allows code to be written that will reliably keep working. Despite this, many packages rely on implicit assumptions about how these things work, and importantly those assumptions may conflict with each other. For example, a PR I made to fix a silent use-after-free-style bug with InlineStrings.jl broke a number of other packages: Implement `cconvert` for `SubArray` by nhz2 · Pull Request #60533 · JuliaLang/julia · GitHub. I am hopeful though, because (maybe it’s just my personal experience) I feel that the contributor experience for making changes to Julia or the standard libraries has gotten way better recently.

5 Likes

I guess a big question for me is where, concretely, donations might help with agenda items like “document breaking changes more clearly” or “improve documentation”? I understand there’s always more work than people available to do it. We were able to make some donations last year, and hopefully will make more at the end of 2026.

6 Likes