To preface, this is just one person’s attempt to help people save some time and engage with a new language with more reasonable expectations, not a standalone or authoritative source of information. I expect learning about Julia or any mentioned topics to be mostly accomplished by studying other sources, and I just hope that my language here is precise enough to provide some helpful search terms. If a reader can clear up some misconceptions by skimming the headers or only reading a few sections, that’s fine too. In no particular order:
Misconception: Julia is the last programming language, the one that will replace all others
Reality: Julia offers an optimizable dynamic language and seeks to work alongside others
To be fair, this myth didn’t come out of nowhere. The first official blog post in 2012 described the creators as “greedy” for features found in various programming languages, and the 2012 paper proposed Julia as an alternative to “two-tiered architectures” in technical computing with “high-level logic in a dynamic language while the heavy lifting is done in C and Fortran”, which was soon called the “two-language problem”. However, the blog post didn’t list every language and their features, which would be futile because many features and paradigms are incompatible. The 2012 paper also declared calling C and Fortran routines a core feature, so despite aiming to reduce the need to wrap those languages, Julia was also designed to do so. Likewise, Julia users embrace great work done in other languages, so the ecosystem does not shy away from wrapping binaries and interoperability. To return the favor, the experimental JuliaC aims to produce binaries of a manageable size that other languages can call.
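As a minimal sketch of the C interoperability mentioned above, Julia can call C library routines directly with `@ccall`, no wrapper code required. The libc functions used here (`cos` and `strlen`) are just illustrative choices:

```julia
# Calling the C math library's `cos` directly from Julia.
c_result = @ccall cos(0.0::Cdouble)::Cdouble
println(c_result)  # 1.0

# libc's `strlen` works the same way; Julia converts the
# String argument to a C-compatible `Cstring` automatically.
n = @ccall strlen("hello"::Cstring)::Csize_t
println(n)  # 5
```

The argument and return type annotations tell Julia how to marshal values across the boundary, which is why no separate binding generator is needed for simple cases.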
Like most programming languages, Julia is an opinion on how to program, specifically in an optimizable dynamic language, and it’s entirely valid to have different opinions and needs that are better served by other languages, for now or forever. For example, Julia is fairly permissive of side effects, so referential transparency and its perks are far more feasible in pure (or more dedicated) functional programming languages. As far as I know, there is no language similar enough to Julia to avoid learning and adjusting to different tradeoffs in a language transition. There are also many good reasons to use a language besides its base design. Perhaps your colleagues or field communicate and collaborate with certain languages, and many great (and well-funded) developers maintain a particular library in one. People can have any kind or number of reasons to invest their time in any practical language, so the most general advice I can give is to consider your needs and keep an open mind.
Misconception: Julia’s compiler is an easy guarantee of peak performance
Reality: Optimizing compilers aren’t the only reasons for performance, and languages without one can be fast
The fact that some interactive languages lack optimizing compilers often misleads users into believing that their code is suboptimal for that reason alone, and they are surprised when otherwise correct Julia ports don’t improve performance.
- “Slow” interactive languages can load binaries compiled from fast (and not-so-interactive) languages that occupy almost all of the runtime, a negligible difference from using the fast languages directly. Practical languages, including Julia, generally do this to avoid the maintenance burden of reinventing wheels. The limitation is that loaded compiled code can’t be further optimized together or with the wrapper language, but that isn’t a guaranteed loss of performance, especially if the bottleneck is elsewhere.
- A compiler is not designed to change the code’s meaning, so performance primarily depends on the algorithm. If you’re observing great performance and optimizations in seemingly simple code, a very good developer implemented it for you.
- Compilers don’t do all known optimizations. That’s why there are libraries with platform-dependent routines, even in assembly.
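The first point above can be illustrated from within Julia itself: dense matrix multiplication forwards to a compiled BLAS routine, so the interactive language already gets compiled-library performance for that heavy lifting.

```julia
using LinearAlgebra

# Dense Float64 matrix multiply dispatches to a compiled BLAS
# routine (gemm) under the hood, just as it would in other
# interactive languages that wrap BLAS.
A = [1.0 2.0; 3.0 4.0]
B = [5.0 6.0; 7.0 8.0]
C = A * B
println(C)  # [19.0 22.0; 43.0 50.0]
```

A Julia port of code that was already spending nearly all of its time inside such a library can’t expect a large speedup from the language change alone.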
Misconception: Multiple dispatch generalizes single dispatch in object-oriented programming, so the runtime overhead must slow down the program
Reality: Multiple dispatch usually occurs at compile-time and is not a generalization of OOP
I’m guessing that this is a mixture of overgeneralizations in the few relevant Wikipedia articles and some misinterpretation. Compilers for many languages can dispatch calls at compile-time over inferred dynamic types, and Julia’s Performance Tips are largely about achieving multiple dispatch at compile-time to enable compiler optimizations. It’s similar in principle to function overloading, which is multiple methods over static types. While multiple dispatch does generalize single dispatch and single dispatch is often represented by object-oriented languages, multiple dispatch in Julia (and CLOS) is not compatible with class encapsulation of methods, the basis of OOP and its particular perks.
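A small sketch of the mechanism, with hypothetical function names: the method is selected by the runtime types of all arguments, and when those types are inferred at compile time, the call is resolved statically with no dispatch overhead.

```julia
# One generic function, multiple methods over all argument types.
combine(a::Int, b::Int) = a + b
combine(a::String, b::String) = a * b          # string concatenation
combine(a::Number, b::String) = string(a) * b  # mixed-type method

println(combine(1, 2))        # 3
println(combine("ab", "cd"))  # abcd
println(combine(3, "rd"))     # 3rd

# When argument types are inferred, e.g. inside a function where
# `a` and `b` are known Ints, the compiler picks the method at
# compile time; `@code_typed combine(1, 2)` shows the resolved body.
```

This is what the Performance Tips largely aim at: writing code so that inference succeeds and dispatch happens at compile time rather than at runtime.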
Misconception: Multiple dispatch lets us use any libraries together
Reality: Composability is more flexible but not effortless to implement
Multiple dispatch and JIT compilation do facilitate independent packages working together or extending each other, e.g. Package1.function1(a::AbstractArray) working on an instance of Package2.Type2 <: AbstractArray, which falls under the systems principle of composability. However, that does not mean that any two packages are automatically composable. Package-mixing calls can fail upon missing methods or incompatible types, and although Julia can catch these during precompilation given enough static dispatch, Julia does not yet have a formal interface system to facilitate method implementations in isolation. These are the relatively convenient fraction of API issues. Algorithm bugs typically must be caught by runtime tests, and subtler wrong assumptions are sometimes only challenged by an independent package down the line. Composability and its principles also exist in other languages, often referred to by different terms like duck typing, and like any good feature, it needs effort, tests, reports, and patches.
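A toy sketch of the composability described above, with hypothetical names standing in for Package1.function1 and Package2.Type2: a generic function written only against the AbstractArray interface works on an array type it has never seen, provided that type implements the informal interface (here `size` and `getindex`).

```julia
# "Package1": a generic function over any AbstractArray.
total_of_squares(a::AbstractArray) = sum(x -> x^2, a)

# "Package2": a lazy constant vector, defined independently.
struct RepeatedValue <: AbstractVector{Int}
    value::Int
    len::Int
end
# Implementing the informal AbstractArray interface makes
# generic fallbacks (iteration, sum, show, ...) just work.
Base.size(r::RepeatedValue) = (r.len,)
Base.getindex(r::RepeatedValue, i::Int) = r.value

println(total_of_squares([1, 2, 3]))            # 14
println(total_of_squares(RepeatedValue(2, 4)))  # 16
```

The flip side is exactly the caveat above: nothing statically checks that `RepeatedValue` implements every method some generic algorithm will need, so a missing method only surfaces as a runtime (or precompilation) error.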
Misconception: Compiler-inferred types are a form of static typing
Reality: Julia is considered to be strictly dynamically typed
Type inference and static dispatch during compilation have led a few people to assert that Julia has static or gradual typing, but that miscategorization rests on a divide between static and dynamic typing drawn from a handful of language implementations. The divide breaks down beyond those, e.g. a GHC option defers compiler-detected type errors in Haskell to runtime. The typical key points that justify Julia’s strict dynamic typing are:
- Method dispatch is entirely based on the types of input objects. There is no separate dispatch feature over the compiler-inferred types associated with expressions.
- Compiler-inferred types are not on the language level, and the perk is allowing type inference to improve across patches and minor versions. Optimizations typically occur when the inferred types match the runtime types exactly, but they can also occur when they don’t.
- Although a type can be declared on the language level for a variable or field, it is still a runtime type that restricts the assigned object’s type, and it is allowed to be compiler-uninferrable, e.g. `var2::typeof( (var1::Ref{Any})[] ) = value` can only infer `var1[]` as `Any`, but its runtime type restricts `var2` instead. To be fair, these declarations are usually used for the same benefits as other languages’ static types are; for example, `struct`s in practice rarely have the default `::Any` fields.
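The points above can be sketched in a few lines, with hypothetical names: a `Ref{Any}` hides the runtime type from inference, yet dispatch and declared-type restrictions still act on the runtime type.

```julia
# Inference only sees `Any` for box[], but dispatch happens
# on the runtime type of the stored object.
box = Ref{Any}(1)
describe(x::Int) = "an Int"
describe(x::String) = "a String"
println(describe(box[]))  # an Int

# A declared variable type is a runtime restriction on assignment:
# assigned values are converted (or rejected) at runtime.
function restricted()
    x::Int = 1.0   # convert(Int, 1.0) succeeds, x is the Int 1
    x
end
println(restricted())  # 1
# By contrast, `x::Int = "one"` would throw a runtime error,
# even in code where inference only saw `Any`.
```

The behavior is checked and enforced when the program runs, not reported as a compile-time type error, which is the crux of calling Julia dynamically typed.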
Some languages do refer to features less important than Julia’s declared types as static typing, but those are more easily justified as expression-associated types on the language level, and it’s worth acknowledging that there isn’t a universal and precise concept of dynamic vs static typing, let alone other programming terminology.