What don't you like about Julia for "serious work"?

Well, this is exactly what I was talking about! There was a long thread where it was explained that multiple dispatch is a way to solve the expression problem. At the same time, the expression problem is a general problem of programming, which has absolutely zero connection to scientific computing. Of course, there are scientific computing problems which are easier to solve in the multiple dispatch paradigm, but you can say the same of all other fields of programming! But still, for some (to me weird) reason, Julia and multiple dispatch are still marketed as a “good fit for mathematical programming”. To be honest, I wouldn’t mind using any other language which has multiple dispatch as its central feature, even if it were something Go/Rust-like.

Regarding the other points, in my world (which can of course differ from other points of view), serious programming does not mean small bash/python scripts; it means multiple middle/backend servers which have to serve many concurrent users, so time to first plot is a non-issue. At the same time, high performance is badly needed, because each extra millisecond of calculation is multiplied by the number of requests, and in the end it turns into extra costs when you need to buy additional servers. In this regard, high performance is not something related purely to scientific programming.

Now, Julia gives you this unique ability to have variable performance, i.e. you can choose whether your code should be short or whether you need to squeeze out all the extra speed. Consider, for example, a dataframe where one of the columns has a date type and you need to pivot this table over months. If it is not critical, you can use the usual stack/unstack commands, but if performance is needed you can exploit the fact that there are only 12 months in a year and write a very performant implementation of such a pivot. This is something that just does not exist in other languages: R/Python do not give this level of control, and in C it would be very complicated.
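To make the “only 12 months” trick concrete, here is a minimal sketch (the function names and the sum-per-month task are invented for illustration, not taken from any library):

```julia
using Dates

# Generic approach: group with a Dict keyed by month.
# Works for any grouping key, pays for hashing on every element.
function pivot_generic(dates, values)
    acc = Dict{Int,Float64}()
    for (d, v) in zip(dates, values)
        acc[month(d)] = get(acc, month(d), 0.0) + v
    end
    return acc
end

# Exploiting that there are only 12 months: a fixed 12-slot
# accumulator, no hashing, a single tight loop.
function pivot_fast(dates, values)
    acc = zeros(12)
    @inbounds for i in eachindex(dates, values)
        acc[month(dates[i])] += values[i]
    end
    return acc
end
```

Both give the same per-month sums; the second is the kind of hand-tuned variant the post describes, written in the same language as the short version.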

So, I’ll reiterate: Julia has features (multiple dispatch, variable performance) which are unique and can give it an edge over other languages, but since all of those features are marketed as scientific, it evolves very slowly in other fields of programming.


A lot of the “static interfaces” discussion is about early error checking. For me another useful part of having type annotations written into the source code is that it’s part of the documentation. In an ecosystem with 100% static annotations, if I want to know how to call a function, I can often just read the signature (plus the documentation).

Currently the common advice for library authors to avoid type annotations (to make your functions more generic) is in tension with communicating to users inline how to use the code. In many cases, library documentation can be fairly sparse and type signatures aren’t provided. I have to guess what the function takes, write a test, and run the test to failure (or maybe a typechecker will warn me). It would be nice if I could just get it right the first time by reading the interface from the signature in the source code.

I don’t know what changes would encourage library authors to put explicit interfaces in their signatures – or whether an IDE could infer the signature and surface it to me instead.

It would be interesting to track a metric of how many parameters are annotated in published libraries.


I believe that for some functions it is really hard to write down correct type annotations concisely (think of zeros) because they are very generic.

In my opinion the fault lies on the programmer’s side. Providing two or three minimal examples and writing good function documentation is easy in Julia and convenient to access. I always try to document even small helper functions with examples so that readers understand how they work.
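For instance, even a throwaway helper can carry a docstring with examples (the function here is invented for illustration):

```julia
"""
    clamp01(x)

Clamp `x` to the interval [0, 1].

Examples: `clamp01(1.7) == 1.0`, `clamp01(-0.2) == 0.0`.
"""
clamp01(x) = clamp(x, zero(x), one(x))
```

The signature stays fully generic (no annotation on `x`), while the docstring and examples tell a reader exactly how to call it.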


Coming onto this thread late, but just to stir the pot. I think what prohibits more mainstream adoption in an industry like aerospace-defense is not only the development-time issue but the embeddability/repeatability/testability/safety issues that need to be verified and validated.

I need a language that can do everything Julia can do but also compile the algorithms and state logic to embedded SW/HW. (3+ language problem)


Pros:

  • Development and collaboration may be faster than C/C++
  • Good for accelerating simulation execution and the development schedule


Concerns:

  • Need to get algorithms into embedded HW/SW with the same ease of development
  • Package libraries rely on internet connectivity for pain-free operation
  • Need air-gap-able secure solutions

Unanswered Questions:

  • How do I cross compile to a RTOS or baremetal embedded application?
  • How can I target an FPGA based SOC?
  • How do I check for safety issues like with some of the C standards like MISRA?

It seems like once I’ve defined the application, I should be able to get Julia to transform that code into a stable/static variant and test it. Do I just need to have Julia spit out C-code?


Keep an eye on https://github.com/tshort/StaticCompiler.jl


I think adoption of Julia in safety-critical systems like aerospace-defense might be far off (maybe mostly for cultural reasons) for anything other than (non-RTOS) Linux and regular hardware. Julia HAS been shown to work for hard real-time, and been preferred over C++ or Java, despite its GC. That is interesting, as Java has a hard-real-time GC option, and C++ has no GC (people often want to avoid GC). In Julia you also want to avoid the GC, which you can.

Linux has a real-time option, but otherwise none of the RTOSes are supported by Julia (that might be a long way off?).

StaticCompiler.jl has many limitations, e.g. I see in the code for compile:

Warning: this will fail on programs that have dynamic dispatch

You can’t target FPGAs currently, or any time soon? Unless you have something very different in mind than I do. I believe there are C-to-FPGA compilers that work (with some restrictions). Even if you had a Julia-to-C compiler (Intel had one, and/or partial compilation to C++, with ParallelAccelerator.jl; both are outdated), I’m not sure it would help you, since it would mean C code targeting a C runtime with garbage collection as an add-on.

Julia doesn’t support baremetal, but consider these projects from Justine Tunney that could make it so.

Note, all her projects are very intriguing (she’s one of my favorite programmers, up there with Jeff, Keno and the rest, and Carmack and Linus), starting with the first two relevant to that:

project called Cosmopolitan which implements the αcτµαlly pδrταblε εxεcµταblε format.
In the above one-liner, we’ve basically reconfigured the stock compiler on Linux so it outputs binaries that’ll run on MacOS, Windows, FreeBSD, OpenBSD, and NetBSD too. They also boot from the BIOS. […]

Platform Agnostic C / C++ / FORTRAN Tooling

Who could have predicted that cross-platform native builds would be this easy? As it turns out, they’re surprisingly cheap too. Even with all the magic numbers, win32 utf-8 polyfills, and bios bootloader code, exes still end up being roughly 100x smaller than Go Hello World

Compiling the Julia runtime to portable (fat) binaries shouldn’t be so hard (with her Cosmopolitan libc), but booting to Julia will be a challenge, as Julia requires an OS present… that will be a lot of work to figure out for full Julia, and likely also(?) for binaries made with the help of StaticCompiler.jl. A first step would be to make a “portable” Julia in name only, one that compiles but only runs on one platform. It might just be a compile away… I’m hoping I’m nerd-sniping someone to try it, and then make it more general.

Her SectorLisp (v2) challenges Jeff’s FemtoLisp (which is also part of Julia), and is smaller than the “Tiniest Organic Creature” at 613 bytes:

This is the first time that a high-level garbage collected programming language has been optimized to fit inside the 512-byte boot sector of a floppy disk. Since we only needed 436 bytes, that means LISP has now outdistanced FORTH and BASIC to be the tiniest programming language in the world.


Your compiler might see that and emit assembly that formats your hard drive with btrfs. That’s the thing about behaviors which are undefined according to the C standard.

I see the issue is with Clang 13/C++ (also C); I’m not sure it applies to LLVM and Julia.

Her Memzoom is also intriguing, as is the “image scaling algorithm (better than Lanczos!)” she uses.

I see in StaticCompiler.jl source code:

stolen from Enzyme.jl/optimize.jl at 1b187cc16953727cab26b64bc6a6dcf106c29a57 · EnzymeAD/Enzyme.jl · GitHub

So I was curious about that package (which otherwise isn’t a direct dependency):

This is a package containing the Julia bindings for Enzyme.

By working at the LLVM level Enzyme is able to differentiate programs in a variety of languages (C, C++, Swift, Julia, Rust, Fortran, TensorFlow, etc) in a single tool and achieve high performance by integrating with LLVM’s optimization pipeline.

I was led to believe Julia is really good for AD; does this mean all the other languages are on a level playing field for that?


:face_with_raised_eyebrow: Is that really a meaningful limitation? I mean, what sort of sense would dynamic dispatch make in a statically compiled program?

In fact, almost all Julia code strives to avoid dynamic dispatch, even when it’s just JITed.


If by statically-compiled, “no JIT compilation” is meant, then dynamic dispatch still seems meaningful? Eg. for a call inferred as f(x::Union{Int, Float64}), you could compile both method instances of f, and have dynamic dispatch. For a complete inference failure, f(x::Any), you can compile the f(@nospecialize x) version. As long as it’s not in the performance-critical portion of the code, it should work fine.
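Roughly, the union-split version could look like this hand-written sketch (the function names are made up; this is what a static compiler could emit, not what any tool currently does):

```julia
f(x::Int) = x + 1
f(x::Float64) = x * 2.0

# For a value inferred as Union{Int, Float64}, a static compiler
# could compile both method instances ahead of time and branch
# between them at runtime -- dynamic dispatch without a JIT.
function call_f(x::Union{Int, Float64})
    if x isa Int
        return f(x::Int)      # statically resolved branch
    else
        return f(x::Float64)  # statically resolved branch
    end
end
```

Both branches are compiled ahead of time; the only runtime work is the type check, which is the cheap form of dynamic dispatch the post describes.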


I’d be super curious to try a mirror-version of Julia where specialization is off-by-default instead of on-by-default. I.e. a method like f(x::Int, y) would be AOT-compiled as f(x::Int, y::Any), and f(x::Int, @specialize y) would be JIT-compiled for whatever y's type is (current Julia behaviour).

The cost is that users sometimes have to annotate with @specialize or type annotations to get good performance, but I wonder how annoying that would be…
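For reference, today’s opt-out already exists as `@nospecialize`, which is roughly the proposed default applied per argument (the function here is a made-up illustration):

```julia
# One generic method body is compiled for any `y`, instead of a
# fresh specialization per concrete type of `y` -- the behaviour
# the proposed mirror-Julia would make the default everywhere.
h(x::Int, @nospecialize(y)) = x + length(y)
```

Flipping the default would just swap which of `@specialize`/`@nospecialize` users have to write.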

Don’t most static languages manage without dynamic dispatch? And since it’s something to be avoided in most Julia code anyway, I just don’t see how this is a ‘significant’ limitation (to choose a different word.)

I would bet that if you could only statically compile Julia code for a single Int input, most people would already be happy.

No, people need Floats and Vectors and Tuples, and Union splitting (for iteration) at a bare minimum.

It was hyperbole. I meant that a tiny set would already be super useful. And each program would probably be fine for just a single input type.

For me there have always been three issues with introducing Julia into our engineering business:

1. Time to first plot. On my laptop it was not too slow; it took about two minutes to run using Plots. Recent versions of Julia have improved this by an order of magnitude, so I am now reasonably content.

2. Slowdowns. Things such as type instability, static vs dynamic vectors, and row-major vs column-major access on arrays. I think I have understood what is happening, but there is always a chance that I will kill the very performance that I am trying to obtain. SBCL Common Lisp’s compiler shows performance hints, and perhaps this could be added to Julia.

3. Threading. This may be something simple that I don’t understand, but every time I use threading the code is slower than the single-threaded code. Humph!


Are you running with JULIA_NUM_THREADS set higher than 1? Usually it’s a good start to set it equal to the number of physical cores you have, though you can try more if you have hyperthreading, and fewer if you want to reserve some compute for the desktop environment.
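Another common cause of threaded code being slower is per-task overhead on cheap loop bodies. A sketch of chunking the work so each task is big enough to amortize that overhead (function names are invented):

```julia
function sum_serial(xs)
    s = 0.0
    for x in xs
        s += sin(x)
    end
    return s
end

# One task per chunk (not per element): the scheduling cost is paid
# nthreads() times, not length(xs) times.
function sum_threaded(xs)
    n = Threads.nthreads()
    chunks = Iterators.partition(xs, cld(length(xs), n))
    tasks = [Threads.@spawn sum_serial(c) for c in chunks]
    return sum(fetch.(tasks))
end
```

With one task per element the overhead usually swamps a body as cheap as `sin`; chunking is what makes the threaded version competitive.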


Does it mean that if there is nothing that I don’t like, I don’t engage in serious work? :wink:


No, it means all your work is fun.


Depending upon the kind of threading, I use JULIA_NUM_THREADS set to 2× the number of hyperthreaded cores, or addprocs(n) where n is set to the same value.

The problem is that parallelism can be oversold. Take pmap, for example:

using Distributed, BenchmarkTools
addprocs(4)                                # i7 dual core
n = 10_000
a = collect(1:n)
@btime map(x -> x, a)                      # simplest function
@btime pmap(x -> x, a)

I compared map against pmap (as always, your mileage may vary):

There is a pretty solid three microsecond overhead per task for pmap. OK, it’s not much, but if the task is small then pmap is much slower than map.

I have got some useful increases in speed. I think that progress is about being more picky.

Note that pmap has a batch_size keyword argument that defaults to 1. If you set it to something larger, e.g. batch_size = cld(length(a), nworkers()), it should perform better on small/cheap tasks.
Larger batch sizes of course make it less effective at load balancing, so consider the tradeoff of overhead vs load balancing for your specific application.
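A minimal sketch of that advice (the worker count and the toy function are arbitrary):

```julia
using Distributed
addprocs(2)

a = collect(1:10_000)

# Default batch_size = 1: one remote task (and its per-task
# overhead) per element.
r1 = pmap(x -> x + 1, a)

# Larger batches amortize the per-task overhead across many
# cheap calls; results are identical.
r2 = pmap(x -> x + 1, a; batch_size = cld(length(a), nworkers()))
```

Both calls return the same result; only the scheduling granularity changes, which is where the time goes for cheap functions.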


So you are setting it to the number of hyperthreads? If you are doing computationally intensive things, it’s usually better to set it to the number of physical cores. Hyperthreading is rarely very effective for computationally intensive workloads.


Hey, we have had some discussions about how to work with Julia and .NET in Interoperability with .NET - #35 by AmburoseSekar and Correct process to use julia code from C# or VB .NET - #22 by AmburoseSekar.
I hope these links help if you did not know about them!
