Usually that’s not a problem, but sometimes you have a data set with more than 4 billion elements.
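For context, 4 billion is roughly where 32-bit indices run out. A quick REPL check (just a sketch of the limits, not tied to any particular data set):

```julia
# 32-bit indices top out just above 4 billion elements;
# Julia's default Int is 64-bit on most platforms, so Base arrays are fine.
println(typemax(Int32))   # 2147483647  (~2.1 billion, signed)
println(typemax(UInt32))  # 4294967295  (~4.3 billion, unsigned)
println(typemax(Int))     # plenty of headroom on a 64-bit machine
```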
Current world population is estimated at 7.7 billion. I guess that means when Facebook finally signs us all up, they'll be switching from PHP to Julia.
Is that my coat? You are so kind.
Types only catch bugs when the types are incorrect. Tests catch bugs when the code computes the wrong value. In the majority of scientific algorithms, it’s easy to get a vector of Float64s out, but it’s hard to get the right vector of Float64s out.
Refactoring in large code bases is easier for me in a statically typed Go project versus a dynamically typed Rails project…
But really, even productive refactoring is more about good design than static types.
I’ve yet to build large projects in Julia, but I’ve found the tooling and support made possible by the type system much more productive and reliable than what I had in my work in Ruby.
I have no reservations building critical software in Julia. The type system, tooling, and macros like @code_warntype make development very productive.
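As a tiny illustration of what @code_warntype catches (a made-up example, not from any particular code base):

```julia
# Type-unstable: the return type depends on a runtime value,
# so the compiler can only infer Union{Int, Float64}.
unstable(flag) = flag ? 1 : 1.0

# In the REPL, `@code_warntype unstable(true)` highlights the
# Union-typed result; a stable rewrite sticks to one type:
stable(flag) = flag ? 1.0 : 2.0
```

Running @code_warntype on the first function flags the unstable result, which is exactly the kind of early feedback the post is describing.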
I also use Go a lot and have started to use Rust heavily.
The biggest benefit I get from statically typed languages (besides knowing my programs are type stable) is readability and maintainability: functions explicitly declare their types, which also makes things like code formatting, code completion, and editor linting work well. For example, the Rust language server for VS Code can lint my program and show me where an error exists before I compile, saving precious time and energy reading compiler error messages.
Many of these tooling benefits that make me productive in Go and Rust I’ve also found in Julia, especially with Juno.
At this point I choose Julia, Go, or Rust based on needs related to packages, deployment, and time to deliver. Speed, memory, and reliability are usually not so well defined that one PL is the clear choice… and honestly for most projects this is the case.
My take on the original post is that for the criteria mentioned, you are most likely to be successful with good developers. And if those developers are highly skilled in Julia, then you’ll get what you need.
Some dynamic languages can have type annotations without changing the runtime semantics. Python is in this category and, IIUC, TypeScript is as well. Julia is the complete opposite in this respect, because multiple dispatch is one of the biggest ingredients of the language.
I remember that “type checking/linting” is listed in the Compiler work priorities post. So I suppose adding something like mypy (a static type checker for Python) is in the long-term plan? If so, it makes me wonder whether adding type annotations (assertions) to the Julia code base for static type checking would be more challenging.
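A toy example of why annotations can’t simply be erased in Julia the way they can in Python or TypeScript — here the “annotation” selects which method runs:

```julia
# In Julia, a type annotation in a signature is a dispatch constraint,
# not a hint a checker can strip: removing it changes which method runs.
describe(x::Integer)       = "an integer"
describe(x::AbstractFloat) = "a float"
describe(x)                = "something else"

describe(1)     # "an integer"
describe(1.0)   # "a float"
describe("hi")  # "something else"
```

(`describe` is a made-up function, just for illustration.)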
Dispatch type constraints should be used chiefly for dispatch, not for artificially restricting method signatures to “known” types, or merely for documentation.
It makes sense in Julia, but I find it interesting compared to how type annotations are discussed in other dynamic languages (“it’s also good for documentation”). Likewise, I imagine people would not use type constraints just to improve static type checking, because it can break others’ code.
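For instance (a made-up pair of functions), tightening a signature “for documentation” silently drops support for other callers’ array types:

```julia
# Over-constrained: only accepts a concrete Vector{Float64}.
mymean_strict(v::Vector{Float64}) = sum(v) / length(v)

# Generic: any real-valued array-like works.
mymean_loose(v::AbstractVector{<:Real}) = sum(v) / length(v)

mymean_loose(1:10)     # 5.5 — ranges work fine
# mymean_strict(1:10)  # MethodError: a range is not a Vector{Float64}
```

The strict version works on the author’s own tests but breaks anyone passing a range, a view, or a StaticArray — which is exactly why the style guide says constraints are for dispatch, not documentation.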
Most kinds other than numeric, scientific, algorithmic, or machine learning programming! It’s kind of absurd just how little computation happens in the vast majority of computer programs…
Anyway, this resonated pretty strongly with me as a way to explain why programmers in my neck of the woods (ML) don’t benefit from static types.
(Except shape types checked as early as possible. We still want those.)
I dunno, I’ve done a lot of non-numerical programming in both static and dynamic languages and this kind of statement still doesn’t resonate. I’m not saying it’s not true for some kinds of programming, but it seems to be more limited than people who say things like “if it compiles, it probably works” would seem to suggest.
On the other hand, it’s way safer than C, C++ and Fortran, which don’t even do array bounds checking or protect you from memory errors, so segfaults and accidental memory corruption are a standard part of daily life in these languages.
I haven’t read this thread, so I don’t understand the overall topic (sorry…), but I think the above sentence is simply wrong, at least for Fortran. AFAIK, all major Fortran compilers support array bounds checking (via options) as well as other safety and debugging features (again via many options, which makes debugging quite a bit easier).
One thing I find very unfortunate is that those compilers do not activate such options by default; the users need to activate them manually. This is the opposite of other recent languages, which report array-bounds violations etc. by default.
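For comparison, Julia sits on the check-by-default end: out-of-bounds access throws with no flags needed, and opting out is explicit and local (a small sketch):

```julia
v = [1, 2, 3]

# Bounds are checked by default — no compiler options required:
try
    v[4]
catch e
    println(e isa BoundsError)  # true
end

# Opting out is an explicit, per-expression annotation:
first_unchecked(v) = @inbounds v[1]
```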
That does make Fortran somewhat safer than C, but given that bounds checking is off by default and you still have to do manual memory management, that still puts Fortran in the “unsafe static” category. But it is a spectrum, not an entirely categorical division.
My main problem with Fortran code in the wild is that the culture of unit testing is not that pervasive. Sure, frameworks exist, but they are rarely used (except for major projects like linear algebra libraries). Coupled with the extremely long-term backward compatibility, people assume that since that code has been around forever, it must be well tested; end of story.
So some people regularly use pieces of code that were written in 1979 by someone who has retired since, has some sporadic comments at best (! 01/19/1982 John D. fixed corner case B. mentioned over lunch), and is a mystery tangle of loops, gotos, and variables used for 19 different purposes in a subroutine that is 700 lines long; a black box for practical purposes even with the source available.
Since version control is also rarely used in practice, one occasionally finds 5 subtly different versions of (ostensibly) the same piece of code. Some of them must have improved something, but it’s hard to say which, or what.
I am glad that Julia has been paying attention to CI and version control from the very beginning, so it is now baked into the culture that surrounds the language.
We had calculations running for 3–4 months that were taking about 32 GB of RAM. No leaks or anything. It was quite stable and trouble-free. It was also running on 20 threads.
That does make Fortran somewhat safer than C, but given that bounds checking is off by default and you still have to do manual memory management, that still puts Fortran in the “unsafe static” category. But it is a spectrum, not an entirely categorical division.
I just want to point out that in my Fortran codes there is no manual memory management (via raw pointers); rather, you use allocatable arrays, which cannot leak memory by design. Together with bounds-checking options, the Fortran that I use is 100% safe (unless there is a bug in the compiler, of course).