Reflections on Developer Experience After Building a Large Julia Codebase

I want to share my experience developing a Julia codebase for a project on high-performance simulation. It comes from porting a ~80k-line Python research code to Julia. Strictly speaking, it is not a “port” but a rewrite, since I know the simulator code well and have been writing idiomatic Julia for some years.

My experience is not about issues with deploying Julia as in the Zipline + Julia talk, although that talk inspired me to share. This is purely about developer experience when building and maintaining a larger codebase.

What works well

Let me start with what I appreciate. Julia is great for researchers writing shorter scripts – magically, it just works. The performance, once you understand the model, is excellent. My current state: 70 seconds precompilation, 10 seconds using time, 20 seconds TTFX, and 1.2 seconds runtime (400ms of which is DiffEq). That runtime is really great for my domain. And, DiffEq.jl IS SO GOOD!

The REPL-focused workflow is not an issue for me. I worked around it using DaemonMode.jl + Revise.jl through a shell script, so I can do mytool run script.jl against a persistent daemon. I tend to think the arguments around REPL-focused workflow are somewhat misplaced. The real issue is TTFX, and once you set up the daemon, the workflow is fine.
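For reference, a minimal sketch of this kind of setup (not my exact script; server and client shown together for illustration, assuming DaemonMode’s serve/runargs API):

# server side: start once and keep it warm
using Revise        # pick up source edits without restarting the daemon
using DaemonMode
serve()             # listens on DaemonMode's default local port

# client side: what a wrapper like `mytool run script.jl` boils down to
#   julia --startup-file=no -e 'using DaemonMode; runargs()' script.jl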

Surface-level friction

The using/import design creates discoverability issues. using DifferentialEquations, for instance, is a wildcard import that brings in hundreds of symbols. Just by reading a script in VS Code, it’s unclear which symbols are available in scope. I enforced strict rules in my project on which symbols to explicitly import, but reading others’ code remains somewhat difficult. And since some symbols may have been @reexported, tracing back the definition of a struct can take time, and finding a particular (but vaguely remembered) method on it can be even harder.
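To illustrate the kind of rule I mean (the symbol names are just examples):

# wildcard: hundreds of exported (and @reexport-ed) names enter scope silently
using DifferentialEquations

# explicit: readers can trace every name back to a package
using DifferentialEquations: ODEProblem, Tsit5, solve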

The power of multiple dispatch combined with the lack of formal interfaces makes it hard to call functions correctly without repeatedly checking documentation. For example, how should one pass KLUFactorization() to QNDF()? IntelliSense gives no hint. The LSP also produces many false positives that I have to ignore.
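(What I eventually dug out of the docs, for reference – a sketch assuming the LinearSolve.jl integration documented for the stiff solvers:)

using OrdinaryDiffEq: QNDF
using LinearSolve: KLUFactorization

# the sparse LU factorization is passed via the linsolve keyword
alg = QNDF(linsolve = KLUFactorization())
# sol = solve(prob, alg)   # prob being an ODEProblem defined elsewhere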

Deeper friction: refactoring without static analysis

Refactoring code is difficult without good IDE support. I refactored my project from a submodule structure into a monorepo and spent an enormous amount of time fixing missing exports. (The monorepo refactor was itself a workaround for a VS Code plugin issue that prevented IntelliSense from indexing the repo under development.) TestItemRunner also had an issue causing crashes due to ImportError after a few minutes of running (I submitted a PR with a fix that has since been merged).

Changing a struct definition or function signature, without static analysis, leads to error messages like “no method matching…(a long list of type parameters).” That error requires two cognitive steps to diagnose: which dispatch out of the many was intended, and which argument is problematic. My workaround has been to limit my use of multiple dispatch so that when code errors, I have only one function signature to consider. It is a conscious tradeoff for clarity. This works for my own code, but errors that propagate through libraries can still produce cryptic messages.
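A toy illustration of the tradeoff (hypothetical names, not from my codebase):

struct Workspace{T}
    buf::T
end

# deliberately the only method: when a call errors, there is exactly one
# signature to compare it against
update!(ws::Workspace, x::AbstractVector) = (ws.buf .= x; ws)

# update!(Workspace(zeros(3)), 1.0)   # MethodError with a single candidate to read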

Performance friction: allocation and compilation

There is a constant fight against allocation and compilation time. To avoid allocation while supporting an evolving model library, I used NamedTuples extensively, which may have been a mistake. I am doing:

struct ResidualWorkspace{T1, T2, ...}
   var_addr::T1
   var_val::T2
   ...
end

struct SystemResiduals{T1, ...}
   ws::T1
   ...
end

sr = SystemResiduals((model1=ResidualWorkspace(...),
                      model2=ResidualWorkspace(...), ...), ...)

But compilation of methods specializing on sr became extremely slow, on the order of two minutes. Diagnosis showed the type parameter was 28 KB. I was unknowingly hammering the compiler. My current workaround is using Ref{Any} as a type barrier, but I am honestly not sure whether this level of internals is supposed to be used by application developers.
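A minimal sketch of the kind of barrier I mean (illustrative names, not my actual code):

struct SystemResidualsAny
    ws::Base.RefValue{Any}   # type barrier: the 28 KB type parameter stays out of signatures
end

# the outer method no longer specializes on the workspace type; one dynamic
# dispatch at this boundary hands off to an inner method that does
residuals!(res, sr::SystemResidualsAny, u, t) = _residuals_inner!(res, sr.ws[], u, t)

function _residuals_inner!(res, ws, u, t)
    # ws has its concrete type again here, so inner kernels compile as usual
    return res
end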

I also used closure patterns to capture concrete types and avoid type instability, but later learned that closures won’t be cached in PrecompileTools workflows. I had to switch to functors, which have been working fine.
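Roughly the change, with made-up names (a linear RHS stands in for my models):

using LinearAlgebra: mul!

# before: a closure capturing the operator; its compiler-generated type was
# awkward to precompile and cache
make_rhs(A) = (du, u, p, t) -> mul!(du, A, u)

# after: a named callable struct ("functor"), easy to precompile and easy to
# read in stack traces
struct LinearRHS{M}
    A::M
end
(f::LinearRHS)(du, u, p, t) = mul!(du, f.A, u)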

Learning to live with it

The experience is challenging, even for package developers who are willing to trade upfront effort for certainty (and performance). Allocation and type stability are difficult to reason about, and I often find myself in a cycle: small refactor → performance degradation → significant time isolating the issue → another refactor. I now have thousands of tests and have learned to set up regression tests for allocation in hot loops, which hopefully catch issues early.
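The allocation regression tests are nothing fancy; a minimal version (with a hypothetical kernel) looks like this:

using Test

function scaled_add!(y, a, x)        # stand-in for a hot-loop kernel
    @inbounds for i in eachindex(y, x)
        y[i] += a * x[i]
    end
    return y
end

@testset "hot loop stays allocation-free" begin
    y, x = rand(100), rand(100)
    scaled_add!(y, 2.0, x)                           # warm up so compilation is excluded
    @test (@allocated scaled_add!(y, 2.0, x)) == 0
end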

Looking forward

This is just a post to share my experience, and I haven’t been able to publish my code. But I’m curious: what is the root cause of these friction points? Is there discussion of a Julia 2.0 that might be breaking but would improve developer experience – perhaps being more restrictive in some language features to make reasoning easier for both the LSP and developers? These are on my wish list:

  • stricter interfaces or traits
  • optional static typing or signatures
  • compiler limits on specialization – requiring developers to think more upfront
  • explicit using/import by default

Anyway, since my community is Python-heavy, I will probably end up packaging with PackageCompiler.jl. But I’d love to hear your perspectives and suggestions.

56 Likes

Some of that may be included here, and doesn’t require a 2.0:

3 Likes

I hope that this issue will be resolved by GitHub - aviatesk/JETLS.jl: A new language server for Julia, enabling modern, compiler-powered tooling. Can you check if it really resolves this issue?

It is still a bit slow (which will change), though.

It is great to know that the strict mechanism is being implemented.

While waiting for it to be ready, I’m evaluating the pros and cons of porting to Rust.

1 Like

Thank you for the pointer! It has fewer false warnings (down from 300 to 200). The one that stays is that it is not aware of @testitem from TestItemRunner.jl and keeps warning about Invalid redefinition of constant, even though those constants live in isolated test scopes. I am not sure where this issue should be filed, since scoping rules should already be defined by the language, but TestItemRunner.jl effectively has its own.
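A minimal example of the false positive (each @testitem runs in its own module at runtime, so the second const is not actually a redefinition; this assumes the TestItems.jl @testitem macro):

using TestItems

@testitem "loose tolerance" begin
    const tol = 1e-6
    @test isapprox(1.0, 1.0 + 1e-7; atol = tol)
end

@testitem "tight tolerance" begin
    const tol = 1e-12         # flagged as "Invalid redefinition of constant"
    @test isapprox(1.0, 1.0; atol = tol)
end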

Edit: IntelliSense does seem to have improved with the new LSP!

2 Likes

I guess here ?

1 Like

We should be using more abstract types appropriately. We’re adding more and more static information to the whole SciML stack. We’ll keep improving. JET testing and trim testing are being added everywhere as well.

Note the docs have all changed to use explicit imports, which hopefully improves this.

9 Likes

Thank you for your reply! Hopefully, tooling continues to improve and will make such refactoring efforts more streamlined.

I didn’t notice the change in docs related to import. It definitely helps!

2 Likes

Yes, I think generally the language and the ecosystem is just becoming more and more static as time goes on, and that’s one of the big pushes we’re doing in the next year.

Also see ExplicitImports.jl, which can help with testing the imports. We’re making it more standard across the repos; most solver repos have rolled it out, but not all yet.
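The typical test-time usage looks roughly like this (MyPackage being a placeholder for the package under test):

using Test, ExplicitImports
using MyPackage

@testset "ExplicitImports" begin
    @test check_no_implicit_imports(MyPackage) === nothing
    @test check_no_stale_explicit_imports(MyPackage) === nothing
end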

14 Likes

Meta comment: This doesn’t seem to fit Offtopic since this very much is about Julia. Could someone with the privileges please move this to General Usage ?

4 Likes

A few questions that I didn’t see addressed in the post (though I may have missed them):

  • What motivated migrating the code over to Julia from Python? Was that motivation realized in the end product?
  • How would you compare performance between the Python and Julia variants?
  • How would you compare the usability of the two?
1 Like

For stricter interfaces/traits we have started using Interfaces.jl which works pretty well. It’s additional work to define and test the interface but it does help catch issues with implementations at test time instead of run time. It would be better if that could be worked out via static analysis in the tooling, but it’s still a big improvement overall.

I have the same general woes with the tooling, especially the issues with cluttered namespaces making it hard to discover things without going straight to the documentation (and often the source code).

2 Likes

You are right. Let me address them.

Motivation: We got a project to evaluate the potential of GPUs for a type of power system simulation at the timescale of milliseconds. These problems are very large differential-algebraic equation systems composed of hundreds of models, each instantiated thousands of times and coupled through nonlinear algebraic equations. I want to investigate the upper limit of computational performance for such problems, and a systematic approach, instead of rewriting a few bottlenecks, is needed to be convincing.

Julia seemed to be the right language: it has sparse matrix support, DiffEq solvers, GPU support, plus automatic differentiation. Equally important, I am somewhat confident that the project is doable in Julia in a couple of years, but I am not confident doing it in Rust, which seems to require substantial development of scientific computing infrastructure first. Before starting the project, I implemented a prototype library, and Julia’s ergonomics and performance were impressive.

The initial plan was to generate kernel code in Julia (since my Python code does modeling in SymPy), so that I only needed to write a thin wrapper for DiffEq. Later, architectural issues in the Python library surfaced, and I decided that a rewrite might be worthwhile. As of now, I haven’t gotten the GPU part done; sparse differentiation on GPU is not within reach without more research and implementation. But I would say I now have a better platform for research and publication.

Performance: The new library is very promising. With a proper variable-step integrator for stiff ODEs, the simulation is probably 100x faster than my Python version, both in small test cases (e.g., one with ~100 variables that the Python version takes 1 sec to run and the Julia variant takes 10 ms) and large ones (e.g., one with ~300k variables: Python takes over an hour, Julia takes 60 sec). This is very impressive.

But it takes a lot of effort to get there, specifically to get to near-zero allocation in hot loops with AD. The program is inherently hierarchical, and as I mentioned in the post, I used NamedTuples a lot and still have JIT delays to deal with. I still don’t think that, in a multi-layer program, I can reason about the type interfaces well enough to minimize allocation; for me, it has to be done through profiling, isolation, and trial and error.

Usability: I am struggling a bit with usability, but not as much as with the tooling. Currently, precompilation time is about 50 seconds, and JIT time is down to a decent 10 seconds (for a program with AD, multiple models, and select OrdinaryDiffEqBase solvers). But for most researchers who use small test cases, this is much longer than the total run time of the Python version.

Another uncertainty is distribution. My community uses Python heavily (@ufechner7, you may comment on this, and I welcome disagreement :D), and I am still thinking about how to attract researchers to Julia. It should probably be distributed with a Python binding (like diffeqpy or PySR) and a sysimage, I guess. At the end of the day, most of the researchers are graduate students, and Python skills are more marketable.

The Python version takes one line to install (pip install andes), and the andes CLI is snappy. For scripting, it is more ergonomic in VS Code or Jupyter – dot + Tab completion helps discover methods easily. I really hope @xgdgsc’s demo on auto completion will move forward.

A strong claim without reasoning: to make Julia more marketable, it has to be more distributable (short scripts should be quick to run, and large programs should be installable in one line), and that means becoming more static. BTW, why do we not have a pip equivalent, and instead have to run julia, activate and install? We need more “Apps”.


Another usability issue for developers: I noticed that packages in Julia are quiet. Even with JULIA_DEBUG on, debug output is limited. This may have to do with the deliberate choice that log messages at @info and above are loud (shown by default, with line info during precompilation), so to reduce noise to end users, packages choose not to log much (related and still open: Provide a clean way to add custom LogLevels · Issue #33418 · JuliaLang/julia · GitHub). For example, I wanted to see intermediate Jacobian matrices while debugging a four-variable system, but DiffEq does not have @debug statements there, so I had to find the proper places to add print statements. The bugs are almost always in my user code, but more debug output would help.
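For concreteness, the kind of logging I wish were there, and how it would be switched on (module and variable names are hypothetical, not from DiffEq):

module ToyNewton
function step!(x, J, r)
    @debug "Newton step" J r          # hidden unless debug logging is enabled
    return x .-= J \ r
end
end

# enable only for this module, e.g. from the shell:
#   JULIA_DEBUG=ToyNewton julia script.jl
# or from within a session:
ENV["JULIA_DEBUG"] = "ToyNewton"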


Another issue in a monorepo structure is Pkg.resolve(). When I add a package in a subrepo, the program gives a dependency error, which can be fixed by a Pkg.resolve() at the root repo. But Pkg.resolve() says there is no change in the top-level Manifest.toml file, which is absolutely correct and at the same time concealing, because it does modify the sub-repo’s Manifest.toml file. I don’t know, I didn’t dig deep enough to find the root cause, but explicit should always be preferred over implicit. Below is an AI summary of this issue:

1. You add a dependency to a sub-package (e.g., `SparseMatrixColorings` to `SubPkg/Project.toml`)
2. Precompilation fails with: "Package SubPkg does not have SparseMatrixColorings in its dependencies"
3. You run `Pkg.resolve()` from the root environment
4. It reports "No Changes" but the dependency issue is fixed

The message only reports changes to Project.toml and Manifest.toml files. It does **not** report:
- Internal dependency graph updates
- Resolution of dev'd package dependencies
- Cross-package dependency linking


In a monorepo with dev'd packages:
- The root Manifest.toml tracks all dependencies
- Each sub-package has its own Project.toml declaring its dependencies
- When you add a dep to a sub-package, the root Manifest may already have it
- `resolve` updates the internal linking without modifying files

This is documented in [Pkg.jl Issue #3066](https://github.com/JuliaLang/Pkg.jl/issues/3066):
4 Likes

It exists: GitHub - fredrikekre/jlpkg: A command line interface (CLI) for Pkg, Julia's package manager.

2 Likes

This sounds off. There certainly is an issue here but I’m fairly certain that it does change the top-level Manifest.toml (contrary to the message) and it does not modify any other Manifest.toml.

2 Likes

Taking this as written, you learned to manage environments in Julia but not in Python, and you’re not aware of how similar Python and Julia are here.

At the risk of explaining some things you already know, an environment specifies a set of packages and their versions. activate is how you specify an environment, and add (not install, I assume that’s a typo) is how you add packages to that environment. If you don’t really care to manage a custom environment, you could’ve skipped to adding to the Julia version’s default environment; a one-liner pip install does the same thing in Python when pip comes from a global installation. There are quite a few options for custom environments in Python, but to use the venv standard library as an example, you also have to activate so that pip operates on that specific environment instead.

While global environments are simpler and quicker, I do not believe there is any good reason to avoid managing custom environments. Most people work on multiple projects over their lifetime, and those projects usually do not have to share the same global environment. Actually, combining all their dependencies into one giant environment risks needless incompatibilities and blocked updates. People who use giant global environments also tend to share source code without a record of the environment at all, which is the biggest reason for such code to stop working after a few years when pip install grabs newer versions with incompatibilities or breaking changes. Traditionally, people would share their requirements.txt for reproducibility, whether or not they used a custom venv environment, but it has a lot of problems; for example, pip uninstall does not remove unused indirect dependencies. Python specified a more recent pyproject.toml for more modern tools like Poetry and PDM to handle these problems. Julia has basically been tackling the same problems with Project.toml and Pkg.

If you want to install packages in one line from the command line, all you have to do is write a more elaborate julia command with a -e input expression containing the relevant Pkg function calls, possibly activating the project first with --project. Note that -e would opt out of the REPL, so you’d have to opt back in with -i. The above jlpkg essentially wraps and specializes on that activity.
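For example, something along these lines (“Example” being a placeholder package):

julia --project=. -e 'using Pkg; Pkg.add("Example")'

julia -i --project=. -e 'using Pkg; Pkg.add("Example")'   # same, but stay in the REPL afterwards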

Apps wouldn’t simplify environments, and this might involve a fairly different way of containing and distributing code than source packages resolved in environments, something more similar to Python binary wheels. Julia artifacts have the same purpose, but compiling Julia code for them is still experimental and limited compared to other languages.

Hard to think of advice here because not much is known about your repo or how you’re adding or deving as you develop.

There isn’t really static analysis for finding a method that doesn’t match but should; the program doesn’t know your intent and probably went wrong because your intent wasn’t written into the program. This is just a fundamental drawback of a function having multiple methods scattered across any importing modules. That’s why many languages avoid multiple dispatch or function overloading by opting for 1 method per function; you would be familiar with how Python functions only have 1 method and are encapsulated in classes or modules. Polymorphism would occur through runtime types or modules specifying various functions of a certain name (duck-typing), and even with that restriction, it can sometimes be difficult to tell if the problem is the wrong runtime type/module/function or the rest of the call signature. Julia’s multiple dispatch doesn’t privilege a particular input, so this difficulty is multiplied.

Limiting methods for a function is actually a reasonable way to mitigate this. One method per function seems extreme, but it’s not unreasonable for many functions. “Multimethods” doesn’t demand you do ALL of your polymorphism through separate methods; oftentimes you want one method to take various input types and do different things, hence a “generic function”. I can’t possibly say what your project calls for, but it’s generally good to aim for most functions having a few very generic methods and depending on a smaller number of functions with many method implementations for concrete input types. This pattern should be familiar in Python APIs, and it’s more formally specified in some statically typed languages like Rust.
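A sketch of that shape, with made-up model types:

struct Integrator
    dt::Float64
end

struct Decay
    rate::Float64
end

# one generic "front door" method handling conversion and bookkeeping...
advance(model, x) = advance_kernel(model, float.(x))

# ...and a small set of concrete kernel methods doing the type-specific work
advance_kernel(m::Integrator, x::Vector{Float64}) = x .+ m.dt
advance_kernel(m::Decay, x::Vector{Float64}) = x .* exp(-m.rate)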

2 Likes

Just dropping in to say: thank you so much for leaving this here. Feedback like this is an absolute gold mine. I think I may just implement some of the logic in our JetBrains IDE plugin so that the IDE can pick up on some of these shortcomings. If you have specific ideas, drop me a line (either here, DM, or GitHub).

5 Likes

First of all, thank you for taking the time to respond!

I didn’t expect that single line of complaint about not having a pip equivalent would get such a long response. I agree with all you said about environments; indeed, it may be helpful to many others. I also get the rationale for not using a jumbo environment.

My points about “not having a CLI tool for managing environments” (I was wrong – jlpkg, there you go) are:

  • managing environments INSIDE Julia is not as integrated as shell-based environment managers.
  • Python’s virtual env decouples pyproject.toml from what’s actually installed in the environment, so that edits to pyproject.toml are infrequent and controlled by the developer.

The first point should be clear. For the months spent on Python project X, a single mamba activate xx in .bashrc works for all the terminals I open. For Julia, I need to remember to activate the proper environment a lot. To get something similar, I could use startup.jl to activate the project I’m developing, but then I run into my second point.

With the project env activated by default, I run into issues of accidentally adding for-debug packages to my project. As I’m typing this, I realize that I could create a TestDrive environment for my package, add the project being developed to it via dev, add the for-debug packages to TestDrive, and then automatically activate TestDrive in startup.jl. Well, no again: if I try to do using SparseArrays in TestDrive, it will error because SparseArrays has yet to be added to TestDrive. I would then have to duplicate all the dependencies of the package being developed. So that’s why the best practice, which I believe many follow, is to install the for-debug packages (or the heavy ones) in the base environment.

The difference between Julia’s Pkg and, say, venv, is that venv decouples pyproject.toml from what’s actually installed in the virtual env. pyproject.toml is only consulted when the user explicitly requests installation, but whatever is installed will stay, and users can keep adding packages to the same virtual env (not base) without breaking anything. In Julia, when developing my package (in VS Code, its environment is active by default), adding any package keeps modifying Project.toml, whether that is desirable or not.

With that said, I accept the status quo.


I was thinking more about how to give users a command-line tool that can be installed in one line and will just run. All the environment management should happen behind the scenes.

I actually made a small app with juliac and find it very promising. I haven’t gotten to try my whole package because of my weak MacBook Air :wink:


Didn’t want to do this, but, to complain responsibly… here’s the repo for reproducing the error: GitHub - cuihantao/pkg-resolve-monorepo-bug: Minimal reproduction: Pkg.resolve() from subpackage doesn't fix root environment

The error message is

> pwd
/tmp/TestMonorepo
> julia --project=. demo.jl
=== Step 1: Try to load TestMonorepo (will fail) ===
Info Given TestMonorepo was explicitly requested, output will be shown live 
ERROR: LoadError: ArgumentError: Package TestSub does not have Statistics in its dependencies:
- You may have a partially installed environment. Try `Pkg.instantiate()`
  to ensure all packages in the environment are installed.
- Or, if you have TestSub checked out for development and have
  added Statistics as a dependency but haven't updated your primary
  environment's manifest file, try `Pkg.resolve()`.
- Otherwise you may need to report an issue with TestSub
Stacktrace:
  [1] macro expansion
    @ ./loading.jl:2406 [inlined]
  [2] macro expansion
    @ ./lock.jl:376 [inlined]
  [3] __require(into::Module, mod::Symbol)
    @ Base ./loading.jl:2386
  [4] require(into::Module, mod::Symbol)
    @ Base ./loading.jl:2362
  [5] top-level scope
    @ /private/tmp/TestMonorepo/TestSub/src/TestSub.jl:6
  [6] include(mod::Module, _path::String)
    @ Base ./Base.jl:306
  [7] include_package_for_output(pkg::Base.PkgId, input::String, depot_path::Vector{String}, dl_load_path::Vector{String}, load_path::Vector{String}, concrete_deps::Vector{Pair{Base.PkgId, UInt128}}, source::String)
    @ Base ./loading.jl:3024
  [8] top-level scope
    @ stdin:5
  [9] eval(m::Module, e::Any)
    @ Core ./boot.jl:489
 [10] include_string(mapexpr::typeof(identity), mod::Module, code::String, filename::String)
    @ Base ./loading.jl:2870
 [11] include_string
    @ ./loading.jl:2880 [inlined]
 [12] exec_options(opts::Base.JLOptions)
    @ Base ./client.jl:315
 [13] _start()
    @ Base ./client.jl:550
in expression starting at /private/tmp/TestMonorepo/TestSub/src/TestSub.jl:1
in expression starting at stdin:5
ERROR: LoadError: Failed to precompile TestSub [87654321-4321-4321-4321-cba987654321] to "/Users/hcui7/.julia/compiled/v1.12/TestSub/jl_GnmB8Y".
Stacktrace:
  [1] error(s::String)
    @ Base ./error.jl:44
  [2] compilecache(pkg::Base.PkgId, path::String, internal_stderr::IO, internal_stdout::IO, keep_loaded_modules::Bool; flags::Cmd, cacheflags::Base.CacheFlags, reasons::Dict{String, Int64}, loadable_exts::Nothing)
    @ Base ./loading.jl:3311
  [3] (::Base.var"#__require_prelocked##0#__require_prelocked##1"{Base.PkgId, String, Dict{String, Int64}})()
    @ Base ./loading.jl:2679
  [4] mkpidlock(f::Base.var"#__require_prelocked##0#__require_prelocked##1"{Base.PkgId, String, Dict{String, Int64}}, at::String, pid::Int32; kwopts::@Kwargs{stale_age::Int64, wait::Bool})
    @ FileWatching.Pidfile ~/.julia/juliaup/julia-1.12.3+0.aarch64.apple.darwin14/share/julia/stdlib/v1.12/FileWatching/src/pidfile.jl:93
  [5] #mkpidlock#7
    @ ~/.julia/juliaup/julia-1.12.3+0.aarch64.apple.darwin14/share/julia/stdlib/v1.12/FileWatching/src/pidfile.jl:88 [inlined]
  [6] trymkpidlock(::Function, ::Vararg{Any}; kwargs::@Kwargs{stale_age::Int64})
    @ FileWatching.Pidfile ~/.julia/juliaup/julia-1.12.3+0.aarch64.apple.darwin14/share/julia/stdlib/v1.12/FileWatching/src/pidfile.jl:114
  [7] #invokelatest_gr#232
    @ ./reflection.jl:1297 [inlined]
  [8] invokelatest_gr
    @ ./reflection.jl:1289 [inlined]
  [9] maybe_cachefile_lock(f::Base.var"#__require_prelocked##0#__require_prelocked##1"{Base.PkgId, String, Dict{String, Int64}}, pkg::Base.PkgId, srcpath::String; stale_age::Int64)
    @ Base ./loading.jl:3882
 [10] maybe_cachefile_lock
    @ ./loading.jl:3879 [inlined]
 [11] __require_prelocked(pkg::Base.PkgId, env::String)
    @ Base ./loading.jl:2665
 [12] _require_prelocked(uuidkey::Base.PkgId, env::String)
    @ Base ./loading.jl:2493
 [13] macro expansion
    @ ./loading.jl:2421 [inlined]
 [14] macro expansion
    @ ./lock.jl:376 [inlined]
 [15] __require(into::Module, mod::Symbol)
    @ Base ./loading.jl:2386
 [16] require(into::Module, mod::Symbol)
    @ Base ./loading.jl:2362
 [17] top-level scope
    @ /private/tmp/TestMonorepo/src/TestMonorepo.jl:3
 [18] include(mod::Module, _path::String)
    @ Base ./Base.jl:306
 [19] include_package_for_output(pkg::Base.PkgId, input::String, depot_path::Vector{String}, dl_load_path::Vector{String}, load_path::Vector{String}, concrete_deps::Vector{Pair{Base.PkgId, UInt128}}, source::Nothing)
    @ Base ./loading.jl:3024
 [20] top-level scope
    @ stdin:5
 [21] eval(m::Module, e::Any)
    @ Core ./boot.jl:489
 [22] include_string(mapexpr::typeof(identity), mod::Module, code::String, filename::String)
    @ Base ./loading.jl:2870
 [23] include_string
    @ ./loading.jl:2880 [inlined]
 [24] exec_options(opts::Base.JLOptions)
    @ Base ./client.jl:315
 [25] _start()
    @ Base ./client.jl:550
in expression starting at /private/tmp/TestMonorepo/src/TestMonorepo.jl:1
in expression starting at stdin:5
  ✗ TestSub
  ✗ TestMonorepo
Precompiling TestMonorepo finished.
  0 dependencies successfully precompiled in 2 seconds. 1 already precompiled.

FAILED: The following 2 direct dependencies failed to precompile:

TestSub 

Failed to p...

=== Step 2: Run Pkg.resolve() ===
     Project No packages added to or removed from `/private/tmp/TestMonorepo/Project.toml`
    Manifest No packages added to or removed from `/private/tmp/TestMonorepo/Manifest.toml`

=== Step 3: Try again (should work) ===
SUCCESS: TestMonorepo.TestSub.compute_mean([1,2,3]) = 2.0

Note the line

     Project No packages added to or removed from `/private/tmp/TestMonorepo/Project.toml`
    Manifest No packages added to or removed from `/private/tmp/TestMonorepo/Manifest.toml`

But somehow it edited Manifest.toml in the root repo:

> git diff | cat
diff --git a/Manifest.toml b/Manifest.toml
index 5f38a11..2b11e4e 100644
--- a/Manifest.toml
+++ b/Manifest.toml
@@ -46,6 +46,7 @@ uuid = "12345678-1234-1234-1234-123456789abc"
 version = "0.1.0"
 
 [[deps.TestSub]]
+deps = ["Statistics"]
 path = "TestSub"
 uuid = "87654321-4321-4321-4321-cba987654321"
 version = "0.1.0"

That makes sense. The whole point of the post is to start a discussion about improving the experience for library developers, so that we can retain them and grow. I had great pleasure coding 2,000 lines in Julia (by hand), a bit of a hard time coding 20,000 lines (with the help of AI) due to tooling and language features, and I would be sad if I stopped before getting to 200,000 lines.

I can see Rust gaining momentum even though the language is hard. Developers are more certain that they just need to get it right once, and the static checking and compiler messages are useful. Anyway, I understand the pros and cons of multiple dispatch vs traits vs OOP, but the question is how we can make the developer experience better with the paradigm we have.

1 Like

You are right. It DID change the top-level Manifest.toml, but the message says otherwise. I posted a repo in the reply above.

Thank you! I am aware of Interfaces.jl, but I am hesitant to use any non-official macro because it might change. For example:

@interface AnimalInterface Animal components description

It’s really hard to tell from this call which of Animal components description is a string and which are variables.

But that’s my problem. I implemented a soft version of Interfaces.jl that warns about interface violations in my tests. I’m hoping that the main language will go in that direction; official trait support would be wonderful.

1 Like