As Julia approaches the 1.0 release, we should think about how to spread the word about what a fantastic project this is.
I am at the moment a member of the Editorial Board of the Journal.
To save you some searching, here is the list of topics that form the broader scope of this journal:
The scope of the journal includes:
• Innovative computational strategies and numerical algorithms for large-scale engineering problems
• Analysis and simulation techniques and systems
• Model and mesh generation
• Control of the accuracy, stability and efficiency of the computational process
• Exploitation of new computing environments (e.g. distributed, heterogeneous and collaborative computing)
• Advanced visualization techniques, virtual environments and prototyping
• Applications of AI, knowledge-based systems, computational intelligence, including fuzzy logic, neural networks and evolutionary computations
• Application of object-oriented technology to engineering problems
• Intelligent human computer interfaces
• Design automation, multidisciplinary design and optimization
• CAD, CAE and integrated process and product development systems
• Quality and reliability
I would like to propose a Special Issue. There are several projects I am aware of that would be excellent candidates for technical articles, but my view is fairly limited by my own interests. I think it is best at this point to poll the community: would anyone be interested in contributing?
I would also like to hear from the lead developers on what the best timing for this might be.
I would also gladly contribute, if possible, but I do not know if I satisfy the necessary criteria.
I have made the package DynamicalSystems.jl, which offers a great number of tools used in research on nonlinear systems, chaos, etc. I think that as Julia gets towards 1.0 and becomes super-famous, many people in my field will benefit.
To be honest, I haven’t found any other package that combines that many features into one flexible and high-level interface. This is actually the reason I started writing it. Also, that statement is not only about Julia: in general I haven’t found any free & open-source libraries that do this in any language.
There are some packages here and there that, e.g., only calculate Lyapunov exponents or only some other quantity. The only one I am aware of that supports many features is TSTools for Matlab, but support and maintenance of that package stopped many years ago.
The problem, though, is that none of the algorithms I use in my package are my own; they are well-known algorithms from well-known papers. So I do not know how I would fit in as a contributor here…
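To give a flavour of what I mean by “well-known algorithms”, here is a minimal sketch of one of them in plain Julia, written for this post rather than taken from the package’s API: the textbook estimate of the Lyapunov exponent of the logistic map, λ ≈ (1/n) Σ log|f′(xᵢ)|.

```julia
# A minimal sketch, not the DynamicalSystems.jl API: Lyapunov exponent of the
# logistic map f(x) = r*x*(1-x), estimated as the average of log|f'(x)| along an orbit.
function logistic_lyapunov(r; x0 = 0.4, ntransient = 1_000, n = 100_000)
    x = x0
    for _ in 1:ntransient          # discard the transient
        x = r * x * (1 - x)
    end
    λ = 0.0
    for _ in 1:n
        x  = r * x * (1 - x)
        λ += log(abs(r * (1 - 2x)))   # |f'(x)| for f(x) = r*x*(1-x)
    end
    return λ / n
end

logistic_lyapunov(4.0)   # ≈ log(2) ≈ 0.693 for the fully chaotic logistic map
```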
@Tamas_Papp, @davidanthoff, @viralbshah: When you have a chance, could you think of how to fit your selected topic into the general scope of the journal and let me know? If you’d like, get in touch privately. Thanks!
I think what you stated in the 3rd paragraph would be a pretty good reason for publishing that as an article: Julia made it possible to design the package with an interface that couldn’t be done with any other language!
I would be interested in submitting an article on my benchmarks of a simple 1D nonlinear PDE simulation (the Kuramoto-Sivashinsky equation) in Matlab, Python, Julia, Fortran, C, and C++. It would be a four- or five-page thing. The algorithm and codes are super-simple, but it gets the point across that Julia is as easy as Matlab and as fast as Fortran, with good potential for classic HPC problems.
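To give an idea of the kind of code involved, here is a minimal sketch in the spirit of the benchmark (my simplified version for this post, not the exact benchmark scripts): a Fourier pseudospectral discretization of the Kuramoto-Sivashinsky equation with a Crank-Nicolson/Adams-Bashforth 2 time step.

```julia
# Kuramoto-Sivashinsky equation u_t = -u*u_x - u_xx - u_xxxx on a periodic domain
# [0, Lx): Fourier pseudospectral in space, CNAB2 in time (a simplified sketch).
using FFTW

function ksintegrate(u0, Lx, dt, Nt)
    Nx    = length(u0)
    kx    = vcat(0:Nx÷2-1, 0, -Nx÷2+1:-1)    # integer wavenumbers, Nyquist mode zeroed
    alpha = 2π .* kx ./ Lx                   # real wavenumbers
    D     = im .* alpha                      # spectral d/dx
    L     = alpha.^2 .- alpha.^4             # linear operator -∂² - ∂⁴ in Fourier space
    G     = -0.5 .* D                        # nonlinear term N(u) = -u*u_x = -1/2 ∂x(u²)

    Ainv = @. 1 / (1 - dt/2 * L)             # diagonal (I - dt/2 L)⁻¹
    B    = @. 1 + dt/2 * L                   # (I + dt/2 L)

    uhat = fft(u0)
    Nhat = G .* fft(u0 .^ 2)
    Nhat_old = Nhat                          # first step effectively explicit Euler

    for _ in 1:Nt
        u = real(ifft(uhat))
        Nhat_old, Nhat = Nhat, G .* fft(u .^ 2)
        uhat = @. Ainv * (B * uhat + dt * (3 * Nhat - Nhat_old) / 2)
    end
    return real(ifft(uhat))
end

# Example: 128 Fourier modes on a domain of length 64π, integrated to t = 50.
x  = 64π * (0:127) / 128
u0 = cos.(x ./ 16) .* (1 .+ sin.(x ./ 16))
u  = ksintegrate(u0, 64π, 1/16, 800)
```

Treating the stiff linear terms implicitly and the nonlinearity explicitly keeps the scheme to a couple of dozen lines, which is what makes it easy to write essentially the same code in every language being compared.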
The problem with that benchmark is that it is mostly measuring FFT time and vector operations; as a result, even Python and Matlab are within roughly a factor of two of C. Problems that spend most of their time in library code are not the most revealing way to compare languages.
I would also be interested in submitting if possible.
Would submissions be open to the community?
I develop NODAL.jl, a Julia package for general-purpose program autotuning that uses stochastic search techniques and design of experiments. I work with autotuning the CUDA compiler and High-Level Synthesis for FPGAs, but this autotuning approach is applicable to many HPC domains.
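For concreteness, the core idea can be sketched in a few lines of generic Julia (a toy illustration of stochastic search written for this post, not NODAL.jl’s actual API; the search space and the cost function below are made-up placeholders):

```julia
# Toy random-search autotuner: sample configurations and keep the one with the
# lowest measured cost. A real autotuner would compile and time a kernel in measure().
const SPACE = (opt_level = 0:3, tile = [8, 16, 32, 64])   # hypothetical tuning parameters

measure(cfg) = abs(cfg.opt_level - 3) + abs(cfg.tile - 32) / 8 + 0.1 * rand()

function random_search(space, measure; iterations = 100)
    best_cfg, best_cost = nothing, Inf
    for _ in 1:iterations
        cfg  = (opt_level = rand(space.opt_level), tile = rand(space.tile))
        cost = measure(cfg)
        if cost < best_cost
            best_cfg, best_cost = cfg, cost
        end
    end
    return best_cfg, best_cost
end

random_search(SPACE, measure)   # best configuration found and its cost
```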
I think there are three main things being measured in that benchmark: FFT time, vectorized arithmetic, and allocation of temporary vectors in the inner loops. The point of the benchmark is that, even though all the codes are calling FFTW and presumably have that same fast C code for vectorized arithmetic, Julia can eliminate the temporaries with very little overhead in code complexity and get right down to the speed of hand-tuned Fortran and C. A factor of 2 is still a big deal for supercomputer applications.
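As a schematic illustration of the temporaries point (a toy example, not the benchmark code itself):

```julia
N = 2^17
a, b, x = rand(N), rand(N), rand(N)
y = similar(x)

# "Vectorized" style: each call allocates a fresh result array; in Matlab or
# NumPy every binary operation in such an expression also allocates an
# intermediate array.
step(x, a, b) = a .* x .+ b .* x .^ 2

# Fused, in-place style: @. fuses the whole right-hand side into one loop that
# writes directly into the preallocated y, with no temporaries at all.
step!(y, x, a, b) = @. y = a * x + b * x^2

step(x, a, b)      # allocates on every call
step!(y, x, a, b)  # allocation-free inner loop
```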
Admittedly it’s a toy problem and not a big parallelized code. But people don’t want to budge from Fortran because, for every benefit of a modern language, they expect a necessary performance penalty. This benchmark is meant to show that this expectation is wrong, and that it’s plausible that a big parallel PDE code in Julia would be entirely competitive with Fortran or C.
You are indisputably right that a portion of the work in this benchmark is handed off to something that is actually shared among the various languages. On the other hand, this is not uncommon: if you take any finite element code, a big chunk of the time to get a solution will be spent on solving the system of equations (let’s say, for simplicity, linear algebraic equations). This work is also farmed out to an external component. In addition, a number of operations will be crunched in LAPACK or BLAS libraries. Coming up with nontrivial benchmark problems that execute entirely within the set of compared languages and nothing else is hard.
Hi @phrb: This is an academic journal; anyone can submit to a special issue, it is not by invitation only. It will still need to go through the regular peer review. If you (or anyone else) are thinking about submitting something, it would be good to consider it in the context of the scope of the journal and its academic character.
What you described sounds great, so flesh it out and you will see where it goes.
Finite element codes are arguably much more interesting than this, because they are filled with time-consuming tasks that vectorize poorly. For example, a big chunk of the time is also spent on matrix assembly, which would be so crazy slow if written in pure Python that codes like FEniCS have to resort to extensive C++ code generation to be performant. Or consider interpolation from one mesh to another. Or adaptive mesh refinement. Etcetera. As a result, production FEM codes require a lot of two-language coding (high-level + low-level) if done from Python or Matlab.
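As a concrete illustration of the matrix-assembly point, here is a minimal sketch of my own (not taken from any of the codes mentioned): assembling the stiffness matrix for a 1D Poisson problem with linear elements. The scalar element loop is exactly the kind of code that crawls in pure Python or Matlab but runs at native speed in Julia.

```julia
# Assemble the stiffness matrix of the 1D Poisson problem -u'' = f with
# piecewise-linear elements on the node coordinates x.
using SparseArrays

function assemble_poisson_1d(x::AbstractVector{<:Real})
    n = length(x)                              # n nodes, n-1 elements
    I, J, V = Int[], Int[], Float64[]
    for e in 1:n-1                             # loop over elements
        h  = x[e+1] - x[e]
        ke = (1 / h) * [1.0 -1.0; -1.0 1.0]    # local stiffness matrix
        nodes = (e, e + 1)
        for a in 1:2, b in 1:2                 # scatter into global triplets
            push!(I, nodes[a]); push!(J, nodes[b]); push!(V, ke[a, b])
        end
    end
    return sparse(I, J, V, n, n)               # duplicate entries are summed
end

K = assemble_poisson_1d(range(0, 1, length = 101))
```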