I think for many people who want some sort of static function annotation system, the points you raise aren’t arguments against having such a system, but are instead seen as positive reasons for wanting to be able to say “no, this function should fail to compile if type inference fails”.
It has nothing to do with performance. Compiling to Rust is about that. Compiling to Python is about accessing a very rich package ecosystem in the area of ML/DL…
The value of Python no longer derives from the language itself (and hasn’t for a very long time now).
For sure, but these are practical barriers: if we have such a system and the package’s types infer properly for the developer but not for the user, don’t you now have untestable errors that only stop user code?
That’s a different notion of effect system. Here, the compiler tracks certain properties of functions to enable certain optimizations. While this has some relation to typing functions with their effects, algebraic effects additionally provide a user-facing API to inject side-effects into functional code – as an alternative to monads. In particular, some effects require resumable one-shot continuations (cooperative concurrency) or full continuations (non-deterministic choice). (Delimited) continuations are currently not supported by the Julia runtime, and thus algebraic effects would be difficult to implement in full generality.
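That said, Julia’s Tasks/Channels already cover the resumable one-shot case in practice. A minimal sketch of mine (not suggesting the runtime supports general continuations): the producer suspends at each put! and the consumer, playing the role of a handler, resumes it:

```julia
# One-shot resumable suspension via Tasks/Channels: the closest thing
# Julia offers to the "cooperative concurrency" class of effects.
ch = Channel{Int}() do c
    for i in 1:3
        put!(c, i)   # "perform" an effect: suspend and hand i to the handler
    end
end

for v in ch          # the "handler": handle each effect, then resume the task
    println("handled ", v)
end
```

Multi-shot continuations, as needed for non-deterministic choice, have no analogue here, which is exactly the gap mentioned above.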
Oh, I totally get that, I’m just so used to compilers targeting C, MLIR, or LLVM that it seemed unusual.
Although, how does compiling to Python improve compatibility compared to a Python FFI? The main thing I’d want to do that I can’t do with a simple FFI is use and extend Python classes, and I don’t know if that would be helped by Python compilation. Unless you’re just talking about IronPython, but then doesn’t that exclude using most Python libraries?
There is lots of other good stuff that comes from this - namely, F# bindings (e.g. type safety) for lots of Python libraries. Also, it uses typeshed behind the scenes.
The main goal of translating F# to Python has nothing to do with performance: it is just a nice way to leverage F#’s features and type safety in the Python ecosystem.
Maybe some other people need no additional reason beyond that it’s fun and can be done.
There is another (often overlooked?) difference between dynamic and static types:
- Dynamic typing means that run-time values have types.
- Static typing assigns types to syntactic expressions, e.g., variables, but not to their values.
In many statically typed languages, no type information is available at runtime. Thus, when dynamic dispatch is needed, it has to be opted into explicitly in order to keep some runtime information around, e.g., Rust’s trait objects Box<dyn SomeTrait> keep a vtable around to dispatch to the corresponding trait implementation at runtime.
Besides giving the compiler important information for generating more efficient code, tracking the types of syntactic expressions enables (static) dispatch on the return type:
ghci> :t pure
pure :: Applicative f => a -> f a
ghci> (2*) . pure 3 $ 4 -- uses pure from Applicative instance for functions
6
ghci> [1, 2] ++ pure 3 -- uses pure from Applicative instance for lists
[1,2,3]
In contrast, in Julia you can always query the runtime type of any value using typeof, and furthermore, this will always return a concrete type (as abstract types don’t have values, i.e., cannot be instantiated):
julia> v = Vector{Any}() # a concrete vector that can hold values of any type
Any[]
julia> push!(v, 1.0); push!(v, 2) # add some values
2-element Vector{Any}:
 1.0
 2
julia> eltype(v) # the vector claims to hold the abstract type Any
Any
julia> typeof.(v) # yet, each value has a concrete type!
2-element Vector{DataType}:
 Float64
 Int64
Modern languages often blur the lines, e.g., Julia using static type information to de-virtualize dynamic dispatch[1], or static languages keeping some runtime type information around. Imho, the basic distinction of whether types exist at runtime (dynamic) or purely at compile time (static) is still valid and helpful.
In this respect, Julia is definitely a dynamically typed language (with clever optimizations based on partial static type information).
Interestingly, basically the same idea of compiling different versions of generic functions is called monomorphization in static languages ↩︎
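To make the footnote concrete, here is a small sketch (hypothetical function f) using the standard @code_typed tool to inspect the per-type specializations Julia compiles:

```julia
using InteractiveUtils  # provides @code_typed (auto-loaded in the REPL)

f(x) = x + x            # hypothetical generic function

@code_typed f(1)        # inspect the body specialized for Int64 (integer add)
@code_typed f(1.0)      # a separate body specialized for Float64 (float add)
```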
Definitely. To some extent Julia does this; if it can figure out a way to assign a type to each expression, it will. This is more obvious in some other languages (like TypeScript or type-checked Python) that allow both.
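For example, a minimal sketch (hypothetical function g): @code_warntype prints the type the compiler assigned to every expression, and here the result can only be narrowed to a small Union:

```julia
using InteractiveUtils  # provides @code_warntype

g(x) = x > 0 ? x : 0.0  # hypothetical: may return an Int64 or a Float64

@code_warntype g(1)     # each expression gets a type; the result is Union{Float64, Int64}
```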
Well, they are not mutually exclusive, but the defaults are very different, i.e., what to do if the compiler fails to assign unambiguous types to all expressions:
- Static: Reject the program, e.g., with some typing error
- Dynamic: Let it run anyway and see what happens at runtime (see the sketch below)
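A minimal sketch of that dynamic default (hypothetical function h): the definition is accepted without complaint, and the missing method only surfaces when the call actually runs:

```julia
h(x) = x + "oops"  # hypothetical; no method +(::Int64, ::String) exists, yet defining h is fine

h(1)               # throws a MethodError at runtime instead of being rejected up front
```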
Also, the type inference of the Julia compiler is comparatively simple, as it mainly needs to work out types starting from some specific call, i.e., with specific concrete types as input, and can just give up, i.e., infer Any, when in doubt. In contrast, a statically typed language would be rather annoying if it rejected all but the simplest programs, and accordingly type systems have become considerably more complex, handling parametric types, trait bounds, higher-kinded types and whatnot.
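A small sketch of that “give up and infer Any” behavior (hypothetical helper, peeking at inference via Base.return_types):

```julia
first_plus_second(v) = v[1] + v[2]  # hypothetical helper

Base.return_types(first_plus_second, (Vector{Int},))  # [Int64]: concrete input, concrete result
Base.return_types(first_plus_second, (Vector{Any},))  # [Any]: with Any elements, inference gives up
```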