Edit for future readers: the responses of @ChrisRackauckas in this thread, together with the response of @tim.holy here (which I missed), give a clear enough plan.
The following explains why roadmaps are difficult.
TLDR: Look at the bold stuff.
I hope to have some discussion here, possibly with members of the Julia core team, about what the future will bring for Julia. In my opinion, there has been a lack of transparency after the concerns raised in, for example, the Mojo thread. I believe a timeline of features to tackle the current problems would help the community a lot.
The main competitors of Julia right now are deep learning DSLs like PyTorch, which are often used outside of deep learning simply because they easily give performant code. So why are scientific programmers not using Julia instead?
I think that Julia only solves the two-language problem for those few who know how to write "mini-compilers" for memory management of arrays.
In the prime example of high-performance computing, DifferentialEquations.jl, all required memory is cached beforehand, like here. Once differentiation enters the picture, it takes roughly 400 lines of code just to get forward-mode AD with memory management working efficiently, not to mention that they had to develop a new datatype for it.
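To make concrete what that pattern looks like, here is a minimal sketch of the "preallocate everything" style (not DiffEq's actual code; `WorkCache` and `rhs!` are hypothetical names): the caller owns every buffer up front, and the hot loop never allocates. The moment ForwardDiff's Dual numbers flow through, the Float64 caches no longer match the element type, which is exactly the problem those extra hundreds of lines have to solve.

```julia
# Hypothetical sketch of the "preallocate everything" style, not DiffEq's actual code.
struct WorkCache{T}
    du::Vector{T}    # output buffer
    tmp::Vector{T}   # scratch space
end
WorkCache{T}(n::Integer) where {T} = WorkCache{T}(zeros(T, n), zeros(T, n))

# In-place right-hand side: every write goes into preallocated memory.
function rhs!(cache::WorkCache, u, p, t)
    @. cache.tmp = p * u            # in-place broadcast into scratch
    @. cache.du  = cache.tmp - u    # in-place broadcast into the output buffer
    return cache.du
end

u     = rand(100)
cache = WorkCache{Float64}(100)
rhs!(cache, u, 2.0, 0.0)            # repeated calls reuse the same memory
# If u carried ForwardDiff.Dual elements, these Float64 caches would be the wrong
# element type — hence the need for dual-number-aware caches in DiffEq.
```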
This approach takes too much time, is too hard for most developers, and doesn't scale to heavier array workloads like those in machine learning. I would love to know whether Julia will tackle this, and how, and when.
The difference with PyTorch, which doesn't have these problems, is that Julia arrays don't know their shape at compile time, so automatic memory management of arrays isn't possible. I think having no separation between lists and arrays in Julia is a mistake (they should still share a supertype, though). This would be solved by adding arrays with known shape to Base, writing automatic array memory management for them, and making AD work well with them. I think MArrays from StaticArrays.jl are a good starting point, but under a name like ShapedArrays, for example.
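To illustrate the difference: a Vector's length is runtime data (it doubles as a growable list via push!), while a StaticArrays type carries its shape in the type, which is essentially what a hypothetical ShapedArrays in Base would give us. A rough sketch of what this already looks like with StaticArrays.jl today:

```julia
using StaticArrays

v = [1.0, 2.0, 3.0]        # Vector{Float64}: the length is runtime data
push!(v, 4.0)              # it is really a growable list, so no static shape

a = @SVector [1.0, 2.0, 3.0]   # SVector{3,Float64}: the shape lives in the type
m = @SMatrix rand(3, 3)        # SMatrix{3,3,Float64,9}

b = m * a                  # result type SVector{3,Float64} is known at compile time,
                           # so a type-stable caller needs no heap allocation here

buf = MVector{3,Float64}(undef)   # mutable but statically sized (the MArray idea)
buf .= m * a                      # in-place update, still no dynamic sizing
```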
With this, it would be possible to statically compile most type-stable code without shipping a GC, since most type-stable functions only allocate because of Arrays. Moreover, compilation to MLIR, GPUs, and other targets should be doable.
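As a quick sanity check of that claim, one can already verify that a type-stable kernel built only on statically sized arrays reports zero heap allocations (a small illustrative sketch, not a proof that static compilation works; `kernel` and `measure` are made-up names):

```julia
using StaticArrays

# Type-stable kernel on statically sized, isbits data: nothing for the GC to do.
kernel(x::SVector{3,Float64}) = 0.5 .* x .+ (@SVector [1.0, 0.0, 0.0])

function measure()
    x = @SVector [1.0, 2.0, 3.0]
    kernel(x)                 # warm-up call so compilation isn't counted
    @allocated kernel(x)      # expected to report 0 bytes
end
measure()
```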
I really believe that this approach is the right one for Julia, since it is doable and since requiring known array sizes in functions is a fair price for static compilation. I know I come across as entitled, but I really love the work that has been going on in Julia, and I would love for Julia to become even more widespread. Am I too entitled to ask this? What is the community's opinion on this?