How do you plot the result of a simulation in a statically typed language, let's say Rust, and then plot the result again after modifying the integration algorithm? And then fit a kernel density to the distribution and check whether it converged to what you expected?
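In Julia that whole loop stays interactive. A minimal sketch of the kind of workflow I mean, assuming the usual Plots.jl and KernelDensity.jl packages (the simulation itself is just a made-up Ornstein-Uhlenbeck example):

using Plots, KernelDensity           # assumed packages, purely for illustration

# made-up example: Euler-Maruyama integration of an Ornstein-Uhlenbeck process
function simulate(n; dt=0.01, θ=1.0, σ=0.5)
    x = zeros(n)
    for i in 2:n
        x[i] = x[i-1] - θ * x[i-1] * dt + σ * sqrt(dt) * randn()
    end
    return x
end

xs = simulate(100_000)
plot(xs)                             # eyeball the trajectory
k = kde(xs[50_000:end])              # fit a kernel density to the post-burn-in samples
plot(k.x, k.density)                 # compare against the expected stationary density
# tweak the integrator, re-include it, and just re-run the lines above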
I feel the pain of people who are trying Julia but are working exclusively, or mostly, on very low-level stuff, with all the tricks and tweaks needed to beat the fastest possible implementation. But I'm pretty happy having all the high-level stuff in the same language, having to worry about those low-level tricks only in very tiny subsets of my code, if at all, and being able to focus on the general picture of the methods used and the results obtained.
Julia solves the two-language problem, but that does not mean it is the best language for every specific development across the complete spectrum of these problems, although I'm very happy to see that developers and users are eager enough to want it to be, and continue building tooling and features in that direction. I only benefit from that, on top of something I already find extremely useful.
You might have noticed Pydantic is exploding within the Python ecosystem. So, as with TypeScript, static typing is seen as the way to "fix" dynamic languages.
I also benefit a lot from the high-level side of Julia. Every idea I would have been afraid to implement in C++ now becomes production code in Julia for my scientific research. I don't want Julia to lose that ability. I don't want to program in an enforced, explicitly typed style anymore!
I think this is a good opportunity to explain, because lots of people are confused about the difference between explicit typing (a huge pain, the C approach) and static typing.
Most statically typed languages nowadays use inferred static types, where the compiler writes the type annotations for you, but the types are still there to catch bugs.
A statically-typed Julia would look basically the same as now, just without type instabilities. For example:
function gcd(n1::Int, n2::Int)
    gcd = 1
    i = 1
    while i <= n1 && i <= n2
        # Check if i is a factor of both integers
        if (n1 % i == 0 && n2 % i == 0)
            gcd = i
        end
        # Error: Type instability
        # failed to infer whether i is an integer or a float!
        i += 1.0
    end
    return gcd
end
This prevents bugs where people accidentally return floats/ints when they meant to return the other, and also improves performance.
I think this is a good default to have; ideally, I'd like it if we had to write something like dyn i += 1.0 to explicitly allow i to behave in a type-unstable way.
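For comparison, today's Julia happily compiles the unstable version, and you have to opt in to checking it yourself. A rough sketch, assuming the gcd method above is defined in the current session:

using Test, InteractiveUtils   # InteractiveUtils provides @code_warntype outside the REPL

@code_warntype gcd(12, 8)      # flags i and the return value as Union{Float64, Int64}
@inferred gcd(12, 8)           # throws, since the actual Float64 result doesn't match the inferred Union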
From some replies, it seems people are taking the trend idea too literally. Some even assume (!) that this is the old dynamic vs. static debate.
I would suggest that there is a historically documented tendency to switch to static typing when projects become large enough (given the difficulty of maintaining them properly and/or the higher probability of breaking something when adding new features).
Also, these days even languages such as C# do well without much explicit annotation. So for many projects there is no reason (obviously, some people can have good reasons to use a dynamic language regardless of project size) to start with a dynamic language if you know the end goal is building a large system.
I think the reasoning is pretty straightforward: either the dynamic languages and/or their tooling need to evolve in such a way that people no longer have reasons to want to switch (I say want, because some would like to switch but are prevented by budget or other constraints).
I cannot speak for large teams here, but from my own experience in Julia's case there is a compromise: pay the price in cognitive load (and, for now, not-so-nice tooling) for some things that are not easily found in other languages (true multiple dispatch and metaprogramming, and maybe decent speed out of the box, although that is not necessarily always true).
On an even more subjective note, starting a(ny) new project in Julia is a joy. Finishing a large project is a pain (especially if you don't have 100% test coverage). An unsafe feeling just hovers over my mind the whole time (JET helps, but knowing it also doesn't cover all the corner cases makes that feeling even harder to deal with). Staying in that state for a long time is not pleasant at all, and I always need to load a lot into short-term memory before writing the first LoC when starting a new coding session. This never happened/happens in F#, where I still rarely write type annotations, but the type hints provided by the tooling automatically document each definition.
Who knows, maybe things like Copilot (also Microsoft) will become powerful enough to remove the cognitive load altogether and make Julia (and other dynamic languages) bliss to use even when the project grows very large.
That sounds very unlike Julia, which routinely uses runtime dispatch and small-Union splitting, e.g. in iteration and sequence searches.
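For instance, something as ordinary as a sequence search hands you a small Union that gets split rather than a compile error (the function name here is just made up for illustration):

# `findfirst` returns Union{Nothing, Int64}; the compiler union-splits the two branches
function first_even_index(xs::Vector{Int})
    i = findfirst(iseven, xs)    # inferred as Union{Nothing, Int64}
    i === nothing && return 0    # the `nothing` case is handled by a static branch after splitting
    return i
end

first_even_index([1, 3, 4, 7])   # 3
first_even_index([1, 3, 5])      # 0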
I am vaguely aware of local variables being inferred in statically typed languages, but are there ones that support unconstrained inference of return types from arbitrary arguments? So instead of a method signature like add(x::Int, y::Float64)::Float64 or even a parametric add(x::T, y::S)::S to support multiple call signatures, it would be enough to have add(x, y) like in Julia? Just hard to spot one in a list of a few dozen languages.
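To make concrete what I mean on the Julia side: the method carries no annotations at all, and you can still ask the reflection helper Base.return_types what gets inferred for each call signature:

add(x, y) = x + y                        # no annotations anywhere

Base.return_types(add, (Int, Float64))   # [Float64]
Base.return_types(add, (Int, Int))       # [Int64]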
Small Union-splitting should be easy to handle (e.g. by wrapping variables in an Optional).
You’re right that runtime dispatches happen quite often, but I think most uses of it are accidental or incorrect; the others can be handled by opting out of type checking.
It's intentional in methods working on types, variable-argument methods like printing, closures that capture reassigned variables, any time @nospecialize shows up... basically wherever type-stable compilation would be hugely bloated with little runtime benefit, or is inherently infeasible. It's definitely preferred that runtime dispatch not occur in a performance-critical context, but not everything is performance-critical, and a dynamic language's inefficient flexibility can be useful.
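The closure case in particular is worth spelling out, since it bites people a lot; a small sketch with invented names:

# a closure that captures a variable which is later reassigned gets a Core.Box,
# so the captured value can no longer be inferred concretely
function make_counter()
    n = 0
    inc() = (n += 1)   # n is captured *and* reassigned, so it is boxed and inferred as Any
    return inc
end

c = make_counter()
c(); c()               # returns 2, but each call dispatches dynamically on the boxed value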
I agree, I’m just saying I think these cases are exceptions rather than the most common cases. Most functions don’t need dynamic typing (which is why most languages get by fine without needing it at all).
Not really relevant to the thread, but Haskell is still too inscrutable to me, evidently. That code and add 1 2.0 work fine, but it errors when I try to add a type annotation add :: Integer -> Float -> Float, add :: a -> b -> b, or add :: a -> a -> a, though add :: Float -> Float -> Float works.
More relevant to the thread, I also found optional return type inference in TypeScript, but it also falls back to inferring anys. I’m guessing this is a reason why a handful of other type-inferring languages require some sort of return type declaration; Rust makes it clearer that global types require it but local ones like closures don’t.
If the Num constraint requires the input and return types to match, why does the call add 1 2.0 work with no annotations? Is it possible to explicitly write an annotation that fits that call?
Because of overloaded number literals: 1 is actually replaced by fromInteger 1 and 2.0 by fromRational 2.0. These expressions then get resolved by the compiler, and both 1 and 2.0 end up typed Fractional a => a.
ghci> add a b = a + b
ghci> :t add 1 2.0
add 1 2.0 :: Fractional a => a
Fractional a => a values are boxed, so adding them up could in theory be slow. But Haskell will also try to do aggressive whole-program inlining and monomorphization (much like how Julia makes it possible for multiple dispatch to be resolved purely statically), and 1 + 2.0 is finally computed with the fastest intrinsic, e.g. floating-point addition, without any virtual-table lookup.
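The Julia half of that comparison is easy to poke at yourself; for a made-up add, each specialization is resolved statically:

using InteractiveUtils   # provides @code_llvm outside the REPL

add(a, b) = a + b

@code_llvm add(1, 2.0)   # the Int argument is converted and a single fadd is emitted; no dispatch
@code_llvm add(1, 2)     # a separate, purely integer specialization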
Yes, add :: Num a => a -> a -> a is a fine annotation, but you can also do:
ghci> :{
ghci| fracadd :: (Fractional a => a) -> (Fractional a => a) -> (Fractional a => a)
ghci| fracadd a b = a + b
ghci| :}
ghci>
ghci> fracadd 1 2
3.0
I believe 3.0 here does not mean Double but is actually a "polymorphic value". It can be instantiated (and actually computed) according to the later call site.
ghci> u = fracadd 1 2
ghci> :{
ghci| y::Double
ghci| y = u
ghci| :}
ghci> y
3.0
ghci> :t y
y :: Double
ghci> :{
ghci| z::Rational
ghci| z = u
ghci| :}
ghci> z
3 % 1
ghci> :t z
z :: Rational
Thank you for this discussion so far regarding inferred types. Is anyone knowledgeable about plans to improve the tooling for writing type-stable code by specifying the return as well as the argument types? I understand this is not encouraged in Julia and limits code reuse due to convert, but for those opting in, the workflow could be similar to the rust-analyzer workflow, where it helps you see in real time the inferred types and any issues as you write the function, along with thread-safety problems and type inconsistencies with the DB schema. I find that to be a helpful workflow. I was only able to find a branch of the VS Code language server which incorporated JET. Is this planned for more development as resources allow, or are there concerns it would promote code that doesn't follow the standard advice of not including return types?
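For what it's worth, the pieces for a manual version of that workflow already exist; a sketch, assuming JET.jl is installed (the function is a made-up example):

using Test
using JET                      # assumed to be installed

half(x::Int)::Float64 = x / 2  # declared return type: the result is convert-ed (and checked) on return

@inferred half(3)              # Test.@inferred errors if the return type isn't concretely inferred
JET.@report_opt half(3)        # JET flags runtime dispatch / inference failures, if any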
In my experience, "try" is the key word here. I had quite a bit of frustration with the rewrite rules that GHC uses for this breaking down with fancier types. That GHC issue was what convinced me to give Julia a try.
Julia also has static types, or am I wrong? I mean, it feels like a dynamically typed and a statically typed language in one, i.e. a superset of both.
In the same way that TypeScript is? I mean, you can type all your Julia code with concrete types, and then you are limited to a C-like language (with the same speed, only GC overhead; note the GC can actually make programs faster, and if not, you can do away with it partially or entirely by using a limited subset of Julia with e.g. StaticCompiler.jl).
JavaScript is a dynamic language, more so than many, e.g. with integers and floats unified into just floats. You list TypeScript under "static types", but isn't it a superset of JavaScript? I.e. wouldn't it be wrong to call it a static language?
If you type all your code there, then in effect it would be, but you can also choose not to, or do so only selectively, so it's pretty much like Julia (just without Julia's other powerful features like multiple dispatch and macros). From what I can see, duck typing there is the same as in dynamic Python.
Static typing is a strict subset of dynamic typing, so every dynamic language is a “superset of both”.
The thing people complain about in dynamic languages like Julia is that there are cases where they want the compiler to throw an error at compile time if static inference fails, instead of just falling back to dynamism.
It’s basically the same as how languages with manual memory management are “simply lacking a feature” relative to languages with garbage collection, but nonetheless people using languages with garbage collectors often complain that they wish they had more control over how allocation occurs, or that the package ecosystem was more friendly to manual memory management.
Right, so you agree TypeScript is a dynamic language too; then why do people not seem to complain much about it or about JavaScript?
"Static inference [fails]" is a computer-science term; does anyone actually want an error at compile time (or at runtime) for it? When does that even happen in Julia? I'm not sure, but I think it doesn't: all code just works, possibly more slowly, or with (avoidable) recompilation. About runtime errors, see what is said about Elm, a language claimed to be statically typed: