One of my upcoming conference slides is going to be
GENERIC PROGRAMMING
with nothing else on the page
Second you on this. Fortran 2003 or 2008 is not bad at all. For example, I actually liked elemental functions, which automatically broadcast scalar functions onto arrays (like Julia dot-syntax without a dot). But the problem is that most people still program in a Fortran 90/95 way (if not 77). Many still write one subroutine that does 10 different jobs and wonder why their code is so hard to debug. And despite the ISO types (iso_fortran_env) that ensure portability, people continue to hard-code kind parameters to select double-precision types. It seems that a lot of good modern Fortran features are underutilized.
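For comparison, here is a minimal Julia sketch of what that elemental/broadcast behaviour looks like with dot-syntax (the function saturate is purely illustrative, not something from this thread):

```julia
# A plain scalar function...
saturate(x) = x / (1 + abs(x))

# ...applied elementwise with dot-syntax, which is what an `elemental`
# Fortran function does automatically, without the dot.
v = [-2.0, 0.0, 3.5]
saturate.(v)          # broadcasts over the array
saturate.(v .+ 1.0)   # fused broadcast over a whole expression
```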
LOL. I haven't been there yet. Definitely gonna check that.
Why is that? I'm somewhat ignorant of modern Fortran (or old, e.g. FORTRAN 77).
[I just posted another question on Julia's "memory model"; I got no answer, so I still assume aliasing is the same as in Fortran.]
Do you have any idea of features in the language that keep them using Fortran, as opposed to just inertia (or because of libraries, as Julia can reuse them all)?
Besides the OO features Fortran now has (I doubt it's that, nor that it attracts many to Fortran), there's now a tail-call optimization (TCO) guarantee.
I also doubt it's that, as I think of Fortran as more of an imperative, rather than a functional (as in Scheme), language. Julia doesn't have TCO (only in femtolisp), I think not even in the restricted sense as in Scala, where it's only for self-recursion. Scheme has TCO even for mutual recursion. The former seems easier, and Julia could well adopt it, at least if this is an important advantage of Fortran.
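To make the self- vs. mutual-recursion distinction concrete, here is a tiny hypothetical Julia sketch; with guaranteed TCO both forms would run in constant stack space, whereas in Julia today a deep enough call chain overflows the stack:

```julia
# Self-recursion: each call adds a stack frame without TCO.
countdown(n) = n == 0 ? 0 : countdown(n - 1)

# Mutual recursion: the harder case, which Scheme's TCO also covers.
iseven_rec(n) = n == 0 ? true  : isodd_rec(n - 1)
isodd_rec(n)  = n == 0 ? false : iseven_rec(n - 1)

countdown(10^3)      # fine
# countdown(10^8)    # would throw StackOverflowError without TCO
```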
http://fortranwiki.org/fortran/show/Fortran+2018
"Most new features in Fortran 2018 are defined in two Technical Specification (TS) documents:
This was a little surprising to me, as I thought it already had good C interop (it's just C++ that's always a problem), and for Julia at least I can't really see how it could be better (except for row- vs. column-major order, which I don't view as an issue, as 1D arrays/strings are ok); possibly they are tackling the same 2D+ issue Julia has?
Keyword "in use"; as your package is state-of-the-art (and includes all of those [Fortran] solvers and more), it should attract people to Julia?! Or do (most) people not need all the new solvers? I also read all the options could be paralyzing…
From what I can tell as someone who reads a lot of Fortran but writes basically none: Fortran is conceptually simple, and nearly always fast. It's just hard to write really bad Fortran. It's like Go from the 70s for scientists: simple, easy to understand, few confusing abstractions. It scales well. It ends up with a lot of code duplication where Julia would push some complex abstraction, but I can see the attraction of that.
People don't seem to be using many post-95 features; not that I know them all, but mostly things are kept super simple.
In my experience it is very easy to write really bad Fortran. In my career I have seen pages and pages of bad Fortran. But even a really bad Fortran program can be fast.
Just start with a COMMON block. The rest of the bad Fortran didn't take too much effort…
For economics it would be a great tool. For data science and machine learning, it needs some work, but it can get there soon. I would suspect that for teaching programming with a focus on scientific computing it would be ideal.
I love that C[++] offers strong typing. I loved that the old Fortran (which was all I ever read) seemed to have cacheable (I->O) / parallelizable functions (i.e., no globals with side effects).
I love that Julia has generic programming with weak typing, allows some strong typing, has data sets, and has parallelization (though not complete yet). I don't like that Julia does not allow me to require strong typing, and that I cannot tell the compiler that a function should not have accidental globals / side effects. I like compilers to tell me when I make errors, besides the fact that the compiler can often use this to speed up the executables.
Having the option to make a function throw an error if it has a side effect would be a neat addition to Julia. Not sure how realistic it is, though.
Yeah, that's what I meant. It's all pretty ugly from a programming perspective, but it still runs fast.
In something like C++ or particularly Julia, this isn't true.
It's not really better. It's a lot of inertia. That's part of why, as @Raf points out, a lot of Fortran still uses very old styles: all loops and array codes.
But there are practical reasons. It's much easier to get fast scientific code in Fortran than in C++, MATLAB, Python, or R, so it's the one people know of as the "fast choice". It's what a lot of old codes were written in, so you have to use it. A lot of PhD work goes "adviser got famous from doing X, it's a Fortran code, my paper will improve X by adding Y", so the path of least resistance tends to be to make modifications to existing Fortran code, or at least to use it as a black box somewhere; but if you're already compiling Fortran (and likely reading the undocumented code) you're starting to use it. More labs have gone to C++ as time has gone by, but a lot of applied math is still in Fortran. If you want to move to Julia, a lot of the time a rewrite is in order, which is really only in the realm of "computationally-capable students". There are a lot of people doing applied mathematics in science, but very few seem to have the ability (or the drive) to really do/learn software development, so there's a big barrier here. However, again, as time goes on, packages are growing to handle what people had traditionally done and there are educational changes, which flows into the next section…
This is probably a much more difficult and multi-faceted question than you originally intended, and it was the problem I set out to solve (I was naive and didn't know how difficult it was either).
@Raf noted before that RK4 is perfectly fine for a lot of his problems. Great! A lot of people can make use of simple ODE solvers which are quick loops. R/MATLAB/Python are absolutely horrible at this (functions which are loops with higher-order functions… the SciPy developers even called it a worst nightmare for Python). Julia, Fortran, and C++ are great for writing these quick solvers, and Julia allows for the cleanest code without sacrificing speed here.
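For illustration, here is a minimal fixed-step RK4 loop in Julia, written as a plain sketch rather than the code of any package mentioned here:

```julia
# Classical RK4 with a fixed step; f(u, t) returns du/dt.
function rk4(f, u0, tspan, dt)
    t, u = tspan[1], u0
    us = [u0]
    while t < tspan[2]
        k1 = f(u, t)
        k2 = f(u + dt/2 * k1, t + dt/2)
        k3 = f(u + dt/2 * k2, t + dt/2)
        k4 = f(u + dt * k3,   t + dt)
        u += dt/6 * (k1 + 2k2 + 2k3 + k4)
        t += dt
        push!(us, u)
    end
    return us
end

# Usage: exponential decay u' = -u
rk4((u, t) -> -u, 1.0, (0.0, 1.0), 0.01)
```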
There's a significant portion of the community that is writing this out by hand because that's how they learned to do it. And if you're not running into major performance issues, that's fine (though you don't have error control if you do this…). But there is still an ongoing mantra that it has to be done because differential equation solver software is not flexible enough to handle a lot of scientific codes. The education on PDE solvers is particularly bad, because I know a lot of people are taught how to solve PDEs without recognizing that PDE solvers such as Crank-Nicolson are essentially just a conversion of a spatial system to a large system of ODEs followed by applying Trapezoid(), where at that point Trapezoid() is an arbitrary ODE solver choice out of the whole list of possible ODE solvers. But as time goes on, more and more people are learning/teaching that you don't have to write entire PDE solvers out by hand and that you can use an ODE solver in there (and have it be more efficient and more accurate). PDE people care since those are quite computationally expensive.
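A minimal method-of-lines sketch of that point, using OrdinaryDiffEq-style syntax (the grid size, equation, and initial condition are just illustrative assumptions):

```julia
using OrdinaryDiffEq  # Trapezoid() is one of its implicit solvers

# 1D heat equation u_t = u_xx with u = 0 at the boundaries:
# finite-difference the space dimension, leaving a system of ODEs.
function heat!(du, u, dx, t)
    N = length(u)
    du[1] = (-2u[1] + u[2]) / dx^2
    du[N] = (u[N-1] - 2u[N]) / dx^2
    for i in 2:N-1
        du[i] = (u[i-1] - 2u[i] + u[i+1]) / dx^2
    end
    return nothing
end

N  = 50
dx = 1.0 / (N + 1)
u0 = [sin(pi * i * dx) for i in 1:N]
prob = ODEProblem(heat!, u0, (0.0, 0.1), dx)  # dx passed as the parameter
sol  = solve(prob, Trapezoid())               # ≈ Crank-Nicolson for this semidiscretization
# Swapping Trapezoid() for any other ODE solver is a one-word change.
```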
Even when PDE people are using ODE solvers, a lot of people still think they have to hand-write the ODE solver part because their scientific model cannot fall into a single vector. You'll notice that a lot of my later developments were showing how to use AbstractArray structures in the diffeq solvers to encode scientific models specifically to address this group that keeps saying that what we have done isn't possible. Most people don't understand generic typing and code generation enough to see how this is all done without losing performance, so we just have to bang out enough examples for people to at least know that they don't know why it works but accept it.
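A hypothetical minimal example of that genericity: the state below is a matrix rather than a vector, and the same solver call works because the solver only relies on generic AbstractArray operations:

```julia
using OrdinaryDiffEq

# Matrix-valued linear ODE u' = A*u; u is a 2×2 matrix, not a Vector.
A = [-0.5  0.1;
      0.2 -0.3]
f!(du, u, A, t) = (du .= A * u)

u0   = [1.0 0.0; 0.0 1.0]
prob = ODEProblem(f!, u0, (0.0, 10.0), A)
sol  = solve(prob, Tsit5())
sol[end]   # the final state is still a 2×2 matrix
```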
Even then, there is still an educational problem with the new methods. A lot of people associate "solve stiff ODE" with "GEAR method" or BDF. Of course, recent literature and benchmarks show that newer things like exponential integrators are a much better choice, but people have learned ode15s and LSODE. For some reason, it's still being taught that those methods created back in the 80s were the best possible things that could be created, so there's no reason to look for anything more modern. What we can do there is just offer very good versions of BDF methods (Sundials CVODE, LSODA, and in the next release we'll have a bunch of pure-Julia ones which have some methodological improvements), try to teach people when the new methods could be used, and document it thoroughly. But most of the time when people are taught methods they are taught in a classroom setting where efficiency doesn't really matter as much, and people get comfortable with a set of methods they know and stick with it.
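An illustrative sketch of what choosing between those stiff solvers looks like in practice (assuming Sundials.jl for CVODE_BDF; the Robertson problem here is just a standard stiff test case, not something from the thread):

```julia
using OrdinaryDiffEq, Sundials

# Robertson's classic stiff chemical kinetics problem
function rober!(du, u, p, t)
    y1, y2, y3 = u
    du[1] = -0.04y1 + 1e4 * y2 * y3
    du[2] =  0.04y1 - 1e4 * y2 * y3 - 3e7 * y2^2
    du[3] =  3e7 * y2^2
end
prob = ODEProblem(rober!, [1.0, 0.0, 0.0], (0.0, 1e5))

# The solver is just an argument, so swapping the classic BDF code
# for a pure-Julia stiff method is a one-word change:
sol_bdf = solve(prob, CVODE_BDF())      # Sundials' BDF, the "classic" choice
sol_ros = solve(prob, Rosenbrock23())   # a pure-Julia stiff solver
```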
That is, until they really need more performance. So what I am trying to present DifferentialEquations.jl as is a tool that someone can grow into. I know that most users will likely never use the vast majority of it, but they can be confident that as they move to more difficult problems and related problems (sensitivity analysis, parameter estimation, etc.) we have a solution which is there for them. At the same time, I think we can mask a lot of the details by increasingly using better defaults and having tutorials just touch on the features that most people need. I describe how we keep going deeper into default handling in a recent preprint:
So in summary, the people that really really need the newest efficient methods are really really enjoying it (and I get a lot of feedback there, just from a smaller but more active population). A lot of other people are pulled in because they need the extra features (continuous function outputs, parameter estimation, etc.). But a lot of people (even in my own lab) are doing just fine letting their ODE run for 2 hours using SciPy + LSODA. Could they do better? Much! But the incentive to change is small if it works, at least until they try scaling up the project. The project seems to be growing in use mostly because the hardcore people need it, and the hardcore people teach it. And then it becomes necessary for a lot of PDE projects or larger scientific models, so researchers are using it. But anything works just fine in undergrad math courses so we have had little growth in that segment of users.
But…
If it is, open an issue on what's confusing. We can always hide details later in the documentation.
I'm a bit unclear about what you mean by "strong typing" and "weak typing" here. Do you mean static type checking versus run-time type checking? Or something else?
apologies, my jargon is lacking. I am not a computer language expert, just a computer language end user.
Indeed, I mean every variable must be explicitly statically defined and is compile-time checked. If I want generic programming, I must still define the variable as inputvariable::Any. Same thing for the function return value.
The casualness of run-time checking is a convenience advantage in interpreted languages, in which I want to vomit down a few lines of code quickly and run my program ASAP, often only once. I don't think implicit declaration / run-time type checking is a big advantage in Julia, because I have to wait a while (compilation) anyway, and because I plan to use Julia for big projects.
As far as I am concerned, the more compile-time checking for errors and warnings (lint), the better. OK, maybe not always, but for large projects this is a big plus of gcc.
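For reference, a small sketch of how far opt-in type annotations currently go in Julia (the function names are hypothetical examples):

```julia
# Argument types are checked at dispatch time; a return-type annotation
# asserts (and converts) the result.
function scale(x::Float64, factor::Float64)::Float64
    return x * factor
end

scale(2.0, 3.0)    # ok
# scale(2, 3)      # MethodError: no method matching scale(::Int64, ::Int64)

# The fully generic version needs no annotation at all
# (::Any is the implicit default).
scale_generic(x, factor) = x * factor
```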
I don't know if I will add much to the discussion here. I work on the systems side of HPC and have had experience with two companies recently.
I tried to evangelise for Julia at ASML, and there was really not much interest.
Same as in the Danish biotech firm I work for at the moment - they are a Python shop and it's going to stay that way.
I know this is going to sound bad, but changes like this are driven by bright spark users, who start using Julia (for instance) and then ask for it to be installed on $BIGHPC.
I was particularly sad at ASML, as I discussed this with a software training company which runs code workshops there. It is common for scientists there to model complex robotics using Python… then recode in C for speed. Which is one of the reasons for starting with Julia in the first place.
Being more constructive, there are of course Julia books, the excellent Julia documentation, and beginner tutorials on YouTube.
Though, if anyone has one: is there a slide deck on "a taster for Julia"?
The sort of thing you can deliver in a 20-minute lunchtime talk, or at a conference if you get a short slot?
I do not have a direct opportunity in mind. However, when I did a short talk on Julia at ASML, I found it difficult not to swerve off into the bushes and start to define multiple dispatch or the type system.
In a short, punchy talk you want "Hey, look at all these lovely things Julia can do".
Why would this be a problem? These are key advantages of Julia. Presumably, the target audience of such a talk are programmers, who should be familiar with these terms.
IMO simply demoing that "Julia is fast" per se is pretty useless. Showing why it is fast requires discussion of the above two concepts, among other things. The key is that one gets speed in a convenient and generic way.
I don't think this is super-useful at this stage. Languages with a mature library ecosystem can do a lot of lovely things, and may easily beat Julia in this comparison.
That said, I don't think that evangelization of this kind works. New ideas spark resistance. Languages spread when their users (in academia or the private sector) appear to be productive, others wonder why, figure out that the language has something to do with it, and try it out themselves.
Also, when someone has a working product that makes money for them and a smoothly functioning team of programmers for language X, it would be pretty unwise to replace X with another language just for the sake of it, no matter how great or useful. Such decisions are usually undertaken to solve some challenge that would be difficult to overcome with the existing setup.
Very well said.
Your point about multiple dispatch and the type system is well made also. What I was trying to say was: how do you present them crisply, without going off into deep technicalities and indeed into areas which you do not understand yourself? Well, I suppose that is part of the art of being a good speaker.
I said something similar in a discussion about CFD and finite element codes on here recently. No one will recode Fluent or LS-DYNA in Julia. They will still be around when we all retire. It is when a new group forms to write a new set of software to do XYZ that we will see commercial Julia codes.
I would do more or less the following:
Step 3 above is key, and will take most of your time. I would estimate that preparing a 1-hour talk along these lines could take a week (not full time, but letting the problem rest and coming back to it). Making a 30-minute talk would take even more time.
I expect that anything less than this is pretty useless: if you don't know their domain then you will have a hard time convincing them to listen to you, and unless you present a nice solution to a relevant problem the case for Julia will remain unconvincing. And even if you do this, don't expect people to switch instantly, just to explore the language.
Just make sure you understand them first
In a machine learning class last semester, I was allowed to spend four classes teaching/presenting Julia to my fellow students. By the last day we got maybe a third of the way through the day 1 material.
When people aren't programmers, or their experiences are only with R, it is very easy to get bogged down with discussions and extra examples of things like multiple dispatch.
It also didn't help that we spent all of day 1 just installing and setting up Julia. Not just using Julia-Pro was a mistake. At that time, I told people to use VSCode as I hadn't tried Juno in a while.
It amazes me how much time can fly when just trying to get everyone set up, even in a class with only 4 other students and the professor.
I had to finish up with one of the students after class, who had not set the path to the Julia executable within VSCode correctly.
None of them have used Julia since. One of their advisers was interested in it, but was concerned that Julia would be bought by another company and adopt a licensed subscription model (really?), and that student encouraged this or was happy about it because they didn't want to use Julia. They were happy with R and SAS.
Evangelizing is hard. It'd probably be hard for me NOT to do it, but I think just focusing on being productive and helpful if others need it is a better bet.