Since Julia blurs the line between compile time and runtime, shouldn’t we talk about pre-runtime and runtime instead?
For instance, can Julia catch type errors at pre-runtime?
I hate when my Python code crashes for stupid shape-related mistakes that would’ve been caught at pre-runtime in languages such as Scala and Haskell.
AFAICT, Julia leverages types to generate fast code and not so much to catch as many mistakes as possible at pre-runtime.
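To make the distinction concrete (a minimal sketch; `f` here is a made-up function):

```julia
# The method error below is only raised when f is actually called with a
# bad argument -- i.e. at runtime, never at "pre-runtime".
f(x::Int) = x + 1

f(1)    # works
f("a")  # throws MethodError, but only now, when this line executes
```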

It might be blurry, but if you squint your eyes, you will see that there are two distinct lines. :slight_smile:

Not really when “normally programming” Julia. You could implement some kind of type checker yourself, but in general in Julia the exception will be thrown at runtime.


There is already a term for the compilation pass: ahead of time (AOT).

It is tricky to define what this means in the context of Julia. E.g.

```julia
struct Foo end
f(a::Foo, b::Foo) = a + b
```
may error because the programmer hasn’t defined an applicable method for +. But if that is done before f is run, then everything is fine.
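Concretely (a sketch, repeating the definitions above so it runs standalone):

```julia
struct Foo end
f(a::Foo, b::Foo) = a + b

# Supplying the missing method before f is ever run makes everything fine:
Base.:+(a::Foo, b::Foo) = Foo()

f(Foo(), Foo())  # returns Foo(), no MethodError
```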


I’m not describing a type of compilation, but a period of time: pre-runtime is the interval (its starting point doesn’t matter) which ends when the program starts running.

I said at pre-runtime so there’s nothing tricky. The answer is simply “No”.

You are talking about quite a few different aspects.

Julia doesn’t blur the line between compile and run. It merely interleaves the times at which they happen. Since it never changes what is done before the run, there’s no need to rename that phase.

This is a completely orthogonal question to compilation. The compiler is basically always transparent to the program: a C interpreter will have just as many static type checks as a C static compiler, or one of them would not be a conforming implementation of the language. Julia is, above all, a dynamically typed language, and adding static type checks to the language would change that; it isn’t about how the compiler interleaves with the runtime. In fact, in order to get well-defined type checking before runtime, you need a clearly defined phase before running where the error can be thrown. A blurred line between compilation and running, as you claimed, is actually exactly the opposite of what you want.

Now, Julia can run user code before runtime, and actually before compilation: macros, generated functions, and (new) customized optimization passes. Of these, pretty much only macros are defined as a separate phase (since type inference and optimization are not well defined / don’t have a stable API). And you can of course do whatever check you want in a macro, just not one based on type info, which would require a complete change to the language to give it its own phase.
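For instance, a macro can reject code at expansion time, long before anything runs (a toy sketch; `@intlit` is made up):

```julia
# A made-up macro that errors while it expands -- at parse/lowering time --
# if its argument is not a literal integer. No type info is involved.
macro intlit(ex)
    ex isa Integer || error("@intlit expects a literal integer")
    return esc(ex)
end

g() = @intlit 42    # fine: the check passed during expansion
# h() = @intlit "x" # would error while the macro expands, before h ever runs
```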


Does Julia do some kind of static checking during JIT compilation?
Are type errors in statements (expressions) caught only if those statements (expressions) are actually executed (evaluated)?
I suspect the answers are Yes and No, respectively.
Python (without optional typing) can’t catch type errors in statements (expressions) that are not executed (evaluated).
Static languages such as C and C++ are expected to catch all those errors during (AoT) compilation.
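For example, plain CPython happily compiles a definition containing an obvious type error and only complains when the bad branch runs (a minimal sketch):

```python
def broken(flag):
    if flag:
        return 1 + "a"  # a TypeError, but only if this branch executes
    return 0

# The definition compiles to bytecode without complaint...
print(broken(False))  # 0
# ...and the error only surfaces when the bad path is taken:
try:
    broken(True)
except TypeError:
    print("caught at runtime")
```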

You claim I’m talking about orthogonal concepts, but I don’t agree. The way Julia runs and compiles code does influence the set of errors caught at runtime, which means that while Julia’s compilation model doesn’t alter the semantics of type-correct code, it does influence the semantics of code that contains mistakes.

Kind of.

More or less yes.

This is wrong. CPython may not have that implemented, and it might not be worth doing, but a JIT can for sure catch type errors just like the “static checking during JIT compilation” in Julia (and this is why I said “kind of” for the first question). PyPy for sure does it (as well as whatever JIT it was from Dropbox…).

And that’s because they have a well-defined compilation phase before the runtime. Again, as I said, the blurred line here, due to the nature of a dynamically typed language, works against throwing an error at compile time.

No, it does not. If compilation or other ways of running the code change what errors are thrown, that’s a very serious bug that should be reported. We certainly have had such bugs before and probably still do.


I never said it can’t be implemented that way. I meant Python can’t catch those errors right now, and it’s a little late to change that, since you’d be breaking previous code.
Just consider a Python function with many execution paths, some of which become type-correct only later. Such a function would be virtually impossible to JIT compile as a whole, and separating the good from the bad execution paths based on the calling code is undecidable in general.
[last edit: OK, maybe there’s no need to separate the paths: one just compiles as much as possible.]
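A sketch of what I mean by a path becoming type-correct only later (`maybe` and `helper` are made-up names):

```python
def maybe(x):
    if x:
        return helper(x)  # helper doesn't exist yet when maybe is defined
    return 0

print(maybe(0))  # 0 -- fine, the bad path never ran

def helper(x):   # defined "later": now the other path is correct too
    return x * 2

print(maybe(3))  # 6
```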

I’m not saying that if there are 3 different ways of running Julia code then you should get different behaviors.

What I’m trying to say is that the initial compilation model chosen for a new language influences the sets of errors which are caught at pre-runtime and runtime.
Although compilation model and type checking are technically orthogonal, in practice they’re not because, for instance, it’s natural for an initially fully interpreted language to catch errors later (if at all) than a JIT compiled language.
I use the terms pre-runtime and runtime because that’s, IMHO, the only useful distinction, at least for the user, when talking about releasing a product with as few mistakes as possible.
To someone who has programmed both in static (e.g. C++) and dynamic (e.g. Python) languages, Julia will appear as a hybrid because some errors which would be caught in popular static languages won’t be caught, and errors which wouldn’t be caught in popular dynamic languages will be caught.

This is also the case in Julia. The compiler makes no difference anywhere.

Well, sure. But I’m saying that the property you are quoting (a blurred line between compilation and running / interleaving the time between them) is a strong argument against type checking, rather than for it, contrary to what you suggested in the first post.

Also, Python is well past that stage. With or without a compiler, this is not really a reason for making a breaking change to the language anymore.

Sure, I have no problem with the name pre-runtime. However, in Julia it’s not the only concept that matters for the user; there’s macro expansion, which separates parse time from lowering/compile time.

I don’t believe this is the case, not in any way that’s related to compilation. The type errors you get are basically from explicit type checks/assertions, which exist in other dynamic languages (and in fact don’t make as much sense in a statically typed language), or method errors, which are indeed not really present in many other languages but are the result of multiple dispatch, not compilation.
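To make the two kinds of errors concrete (a sketch; `h` is a made-up function):

```julia
# Explicit assertion: the kind of check you also find in other dynamic languages.
x = 1
x::Int           # fine
# x::String      # would throw a TypeError at runtime

# Method error: a consequence of multiple dispatch, not of compilation.
h(a::Int) = a
# h("s")         # would throw a MethodError at runtime
```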


I don’t understand how Julia can produce fast code and yet be as dynamic as Python. I thought Julia (JIT) compiled functions as a whole and did type inference to produce efficient code. Maybe when type inference is not possible (because of “temporary” or “permanent” errors) it falls back on less efficient code? And what happens when the type information is finally available? You said the code is compiled only once, so I guess the user should call a function for the first time, triggering the JIT compilation, only when enough type information is available, for maximum efficiency?
In practice I’d split the function into two simpler functions, of course. I’m just trying to clarify a doubt.

It’s not as dynamic in general, but it is about as dynamic in terms of variable types. We don’t have metaclasses, for example, we don’t have dynamically changeable fields (roughly equivalent to `__slots__` in Python), and we allow annotating fields with their types, which is about types but not about variables. We also don’t have local-scope eval, which killed PyPy performance very badly at least as of a few years ago, and we have overflowing integer arithmetic, just like Python 2 but unlike Python 3.
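E.g., a sketch of the overflow behavior:

```julia
# Fixed-width machine integers wrap around instead of promoting:
typemax(Int64) + 1 == typemin(Int64)  # true

# Arbitrary precision is opt-in via BigInt:
big(typemax(Int64)) + 1               # 9223372036854775808
```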

Yes, but it uses runtime types (and inferred types based on the caller), which have little to do with any type annotations, so there is little static about it.

That is correct. (Although there’s also the `precompile` function, which lets you feed the type info manually without running the function.)
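A minimal sketch of that (`f` is a made-up function):

```julia
f(x) = x + 1

# Feed the compiler a concrete argument-type signature without running f:
precompile(f, (Int,))  # returns true if a specialization was compiled

f(1)  # the first call can now hit already-compiled code
```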


Thank you for clarifying my doubts. I’m looking forward to using Julia as soon as it gets a little more mature, especially as a differentiable programming language. Python and PyTorch are enough for what I’m doing right now, and JAX (based on tracing) adds some flexibility, but Python will never be truly differentiable.