Will Julia ever fix its "using ..." latency problems?

When I’m in the early phase of development that requires frequently redefining structs, I like to swap in NamedTuples. You can’t use Julia’s dispatch system on them, but they behave enough like structs to allow drop-in replacement in most simple cases (concretely typed fields, dot syntax for field access).
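A minimal sketch of that swap (the `Point` name and fields here are hypothetical, just for illustration):

```julia
# Hypothetical prototyping trick: instead of
#   struct Point; x::Float64; y::Float64; end
# define a constructor function that returns a NamedTuple.
Point(x, y) = (x = Float64(x), y = Float64(y))

p = Point(1, 2)
p.x + p.y   # dot access works just like a struct → 3.0

# Redefining Point later is just redefining a function,
# so no session restart is needed.
```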


It would be nice if Julia mentioned its `using ...` performance disadvantage on its website. I hate it when open source software promises a lot more than it delivers.

Well, that is just rude to the developers. First, because it is a matter of priorities; it never bothered me as badly as it seems to bother you. Second, because NO software, nor anything else, starts out by advertising its possible disadvantages relative to every other alternative.


You mean how the Python webpage warns you of a two order of magnitude performance disadvantage to writing a for loop?


No, it is not.
Remember that I did not criticize Julia for having disadvantages; I criticized them for not advertising their disadvantages.

I don’t know what you mean.

Every software damn well should.
Very little software does (the only example I could find is Meson).

Just because everyone fails to do this does not mean that it is right to not do it.


They don’t but I think they damn well should.


But Python is fast! Look at this benchmark where I call NumPy (a C library) and it’s fast! No other language has the feature of calling C libraries!


I meant that I do not think this is a disadvantage worth mentioning on the top Julia page. For me it was never an issue. There are other things in Julia that I like less than that, and I do not think they should be put on the top pages either. Also, it is written in the docs, explained, discussed, and people are working on it.

It is written here: https://docs.julialang.org/en/v1/manual/modules/#Module-initialization-and-precompilation

There is also a dedicated page to the differences between Julia and other languages, where every field can be considered an advantage or disadvantage, depending on the user: https://docs.julialang.org/en/v1/manual/noteworthy-differences/


julialang.org has


Julia was designed from the beginning for high performance. Julia programs compile to efficient native code for multiple platforms via LLVM.

prominently displayed, so it’s clear that compilation is needed. This “time to first plot” discussion has been around forever, it’s hardly a hidden pitfall of Julia.


Maybe there should be a note here about Julia’s `using ...` performance.

You can propose the addition there directly as a pull request, wherever you think that is important. At the end, it is free open source software.


This is actually quite an elegant solution for long and complex interactive prototyping sessions.

May I ask how long you’ve been using Julia? I certainly remember the days of 0.6, where e.g. `using Plots` took tens of minutes on the very machine I’m typing on now.

Also, there have been multiple links in this thread (and in the linked issues and other things) pointing towards improving that state even more. As a short recap what has happened in the past few months:

  • Deep investigations into what made compilation slower than necessary
  • Removing those unnecessary slow spots
  • Building better package-serving infrastructure (PkgServer serving tarballs instead of downloading git repositories from GitHub…)
  • Figuring out why some packages and code patterns took a long time to compile, and fixing those code paths in the compiler
  • Making the compiler itself faster
  • Investigating why certain packages increase load time dramatically
  • Fixing those packages (this is the recent “crusade on invalidations”: loading a new package can invalidate a lot of already cached, compiled functions, forcing them to be recompiled)

Some future work includes:

  • Cache not only lowered & typed code, but also native, compiled machine code
  • Cache more code between environments
  • Cache more code
  • Reduce compiler latency
  • maybe even serve compiled code?
  • Did I mention cache more code, even compiled code?

I’m only 95% sure that nothing similar would crop up, but yes, these specific problems are gone insofar as you can’t express that idea at all anymore.

Without eval and @eval, there is no REPL and there is no interactive mode. There’s also no recompilation at runtime and no shadowing of existing things in the same namespace, since there’s no way to “redefine” anything like a struct or a function at runtime. There’s also no convenient way to create families of similar function expressions, and writing boilerplate (defining functions with different names that take the same arguments) would be a dread. Julia would just be another statically compiled language like C, C++ or Rust (or any number of other compiled languages).
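As a hedged sketch of the kind of boilerplate generation `@eval` enables (the function names here are made up for the example):

```julia
# Hypothetical example: generate a family of similar functions with @eval
# instead of writing each definition by hand.
for op in (:add, :sub)
    f = op === :add ? :+ : :-
    fname = Symbol(op, "_scaled")          # e.g. :add_scaled
    @eval $fname(a, b, scale) = $f(a, b) * scale
end

add_scaled(2, 3, 10)   # (2 + 3) * 10 → 50
sub_scaled(5, 3, 10)   # (5 - 3) * 10 → 20
```

Without `eval`/`@eval`, each of these definitions would have to be written out (or macro-generated at parse time) instead.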

It certainly wouldn’t be the end of the world and most of the time when people want to use eval, they really shouldn’t anyway (don’t parse & eval strings to emulate bad macros!). Their existence does make certain things much easier and provides a lot of flexibility though, so I’m really glad we have them. You also learn a lot about why it’s a bad idea to allow redefining everything at any time, so being limited to certain places makes the language much more powerful & performant as a whole.

Always remember, the really fast parts in Python are not written in Python :slight_smile:


The search and replace trick to update structs works just fine with modules. I usually go for weeks (working everyday) without restarting my sessions, and I’m not even using Revise (I usually reload manually using @eval MyModule include("myfile")).
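A sketch of that module-based workflow, with hypothetical names (`MyModule`, `myfile.jl`) standing in for whatever the project actually uses:

```julia
# Hypothetical setup: keep the struct inside a module, so "redefining" it
# means re-including the file into the module rather than restarting Julia.
module MyModule
    struct Point
        x::Float64
        y::Float64
    end
end

# After editing the struct definition in myfile.jl, reload it in place:
# @eval MyModule include("myfile.jl")
```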

Only for about a month. For my bachelor’s thesis in physics I wrote Rust and Python, and for my master’s thesis I’m hoping to replace both of them with Julia (at least for the scientific computing).

I’m happy to see that Julia is very actively developed.

Thank you for your explanation.

Of course.


See https://github.com/JuliaLang/julia/pull/38572


I figured that’s why JS is so common. I’d think Go or Elixir would be good for the front end, maybe Crystal one day.

On using DifferentialEquations being slow; I recommend just not pulling in all the DifferentialEquations.jl packages in this way. I regularly use ODE/SDE/jump solvers, but never load the full meta package. I really think that at this point the meta package should only be used for simplifying tutorials aimed at beginners. For solving ODEs usually just

using DiffEqBase, OrdinaryDiffEq

is sufficient to load everything one needs. For SDEs you’ll also load StochasticDiffEq, etc. If you load just the set of component packages you need, you’ll find a much reduced import time.
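A minimal example along those lines, using only the OrdinaryDiffEq component package (the specific ODE here is just an illustration, not from the post):

```julia
# Hypothetical minimal example: solve an ODE with only OrdinaryDiffEq loaded,
# avoiding the full DifferentialEquations meta package.
using OrdinaryDiffEq

f(u, p, t) = 1.01 * u                  # du/dt = 1.01u (exponential growth)
prob = ODEProblem(f, 1/2, (0.0, 1.0))  # u(0) = 1/2 on t ∈ [0, 1]
sol = solve(prob, Tsit5())             # Tsit5: a good default explicit RK solver
sol(1.0)                               # ≈ exp(1.01) / 2
```

Loading `OrdinaryDiffEq` alone should import noticeably faster than `using DifferentialEquations`.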


You can find all open performance issues on GitHub. Listing these on the julialang website would be redundant and overwhelming — that’s not the place for them.

While it is a work in progress, like all modern programming languages, a lot of people are using Julia productively. Instead of complaining about it being “nearly unusable”, I would recommend that you just invest in learning the workflow that fits the language.


Does jupytext work within IJulia or is it a different deal entirely?


$ julia --help-hidden
julia [switches] -- [programfile] [args...]
 --compile={yes|no|all|min}    Enable or disable JIT compiler, or request exhaustive compilation

I call `julia -O0 --compile=min` the “Python” option. It’s some minimal compilation (and yes, Python has compilation too; it’s just minimal, to bytecode). Feel free to try `--compile=no`, though that goes too far: you immediately get all kinds of strangeness.

So “no” compilation is officially documented as an option, implying interpretation.

I can see your point, though you hold them to a very high standard, higher than I would guess any proprietary software and most open source software meet.

Before we put in a disadvantages section, it’s good to know what’s already there to fix startup slowness, or at least link to the (partial, for now) solution.