You may or may not agree, but Julia is found lacking here:
Julia Still Not Grown Up Enough to Ride Exascale Train - The Next Platform
Both the blog post and the preprint actually seem quite a bit more positive than I would have expected given the title.
It is a bit tongue-in-cheek on my part, for sure.
One thing worth noting is that the performance issues in the paper come from GPU code where the authors explicitly say that the LLVM code looks good, so it's unclear if anyone is capable of generating good code for this system other than just calling the AMD primitives.
I agree. Note that the title of the preprint ("Julia as a unifying end-to-end workflow language on the Frontier exascale system") is much more positive than that of the blog post.
Also, we're talking exascale here. While it's great that people are considering Julia for this, I'm already happy if we can play a good role in the "regular HPC" world.
I am very interested in the (surprisingly short) section on Jupyter notebook tests. I had expected a lot more, given the introduction at the start of the article.
I think this statement tells us quite a lot about the potential of Julia:
"This instance demonstrates Julia's potential as a unified language that can seamlessly connect different stages of an HPC workflow, from computational simulation to data visualization"
I am going to go out on a limb here: traditional HPC workflows are import model / create a mesh / run the solver / run a post-processing program to produce results/pictures/movies.
Then users either use visualisation nodes remotely or copy result files to storage that is accessible from Windows desktops/laptops.
If we can show that the same "toolkit" also works for visualisation, that is a great plus for Julia.