In many cases it can be hard to know exactly which types will come out. One major example is matrix factorizations. Each matrix structure has its own Julia type, because specializing on the structure gives much faster algorithms (tridiagonal solvers, banded solvers, etc.). But the generic \ cannot know in advance which matrix type it will need to create internally. That doesn’t matter: it can find out at runtime, factorize into the appropriate type, and pay a ~100ns dynamic dispatch to solve \ using the specialized matrix type. 100ns is peanuts compared to the rest of the computation, making this a very good idea.
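As a concrete illustration, here is a minimal sketch using the standard LinearAlgebra `factorize` function (the comments note which factorization types you would typically get; the exact types are an implementation detail and may vary by Julia version):

```julia
using LinearAlgebra

b = rand(5)

# Both inputs are plain Matrix{Float64}, so the return type of factorize
# cannot be inferred from the input type alone: it depends on the runtime
# structure of the matrix.
A1 = Matrix(Tridiagonal(rand(4), rand(5), rand(4)))  # tridiagonal structure
A2 = (B = rand(5, 5); B' * B + 5I)                   # positive-definite

F1 = factorize(A1)  # e.g. an LU of a Tridiagonal
F2 = factorize(A2)  # e.g. a Cholesky

# One ~100ns dynamic dispatch here picks the specialized solver for each
# factorization type; the solve itself then runs fully specialized code.
x1 = F1 \ b
x2 = F2 \ b
```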
For this to be completely internally type-stable, you would have to do one of the following:

1. Always have the user decide which factorization to use
2. Only ever use one factorization
3. Always factorize to the same type, and use runtime checks to decide which matrix operations to perform in the subsequent steps
(1) and (2) are restrictive design-wise (and are what you see in C APIs). (3) is slower, and is what you see in something like MATLAB. An internal type-instability solves the problem beautifully: a single tiny internal dynamic dispatch (a “function barrier”, if you will) buys you both flexibility and performance.
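Internally, the pattern looks something like the following hypothetical sketch (`generic_solve` and `_solve` are made-up names, not LinearAlgebra’s actual internals):

```julia
using LinearAlgebra

# Type-unstable outer function: the type of F depends on A's runtime
# structure, so the compiler cannot infer it here.
function generic_solve(A::AbstractMatrix, b::AbstractVector)
    F = factorize(A)     # could be an LU, Cholesky, QR, ...
    return _solve(F, b)  # one dynamic dispatch: the "function barrier"
end

# Inside the barrier, F has a concrete type, so this compiles to code
# fully specialized for each factorization type that actually occurs.
_solve(F, b) = F \ b
```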
And you’ll notice that in many cases, you want a design that is “type-instability + function barrier”. This lets you make a high-level algorithm choice that then drops down to efficient compiled code, with a pretty small runtime check (the dynamic dispatch) in the middle. NLsolve + ForwardDiff, with its dynamic choice of chunk size, is another example of this: a tiny runtime cost that then enters efficient compiled code.
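The chunk-size case follows the same shape. Here is a hedged sketch of the pattern, with `Val` from Base Julia carrying the runtime choice into the type domain (`pick_chunksize` and `kernel` are hypothetical names, not ForwardDiff’s API):

```julia
# Type-unstable outer function: n is a runtime integer, so the type of
# Val(n) cannot be inferred in advance.
function outer(x::Vector{Float64})
    n = pick_chunksize(length(x))  # runtime algorithm choice
    return kernel(x, Val(n))       # dynamic dispatch: the function barrier
end

# Hypothetical heuristic standing in for ForwardDiff's chunk-size choice.
pick_chunksize(len) = min(len, 8)

# Below the barrier, N is a compile-time constant: the loop can be fully
# unrolled and specialized for each chunk size that actually occurs.
function kernel(x, ::Val{N}) where {N}
    s = 0.0
    for i in 1:N
        s += x[i]^2
    end
    return s
end
```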
The design space is just much larger when you allow this. It’s a genuinely unique feature of Julia, one that should be embraced rather than avoided.