Hey @jreileyclark – I am happy to give a video call to you about this. Rather than money, could you help me make a blog about the process I’ll show you and then we could put it up somewhere together? My email is jacobszelko@gmail.com if you want to continue the conversation.
I think that would be a brilliant idea and an actionable outcome here from @jreileyclark and I chatting! And awesome – Justin, just email me and we can figure out times that work for us to chat.
Also, at some point @gdalle , I need to send you a list of blog posts I have written in the past to see if we can salvage anything I have written and turn it into a workflow within your workflows project (huge fan).
P.S. Guillaume, we should chat at JuliaCon if you are there this year! Would love to commiserate over ideas here.
@jreileyclark I found your take really relevant, and the sentence above stuck with me. This approach of “programming as a way to communicate concepts” is exactly what we’ve been trying to do with Alan Edelman for the JuliaComputation class. You’re right, Julia allows us a level of clarity that other languages don’t, and teaching is a great way to showcase it. I am very proud of some of the homeworks we wrote in this spirit (like the Pokémon one, or the one on atmospheric temperatures), and if I had dedicated funding I would definitely churn out more such content…
… which brings me to my main pain point. The Julia community is overwhelmingly academic, and in our line of work, I feel like such outreach initiatives are not considered worth spending time on.
I’d be curious if anyone has suggestions of grants or other support one could apply for, if the goal is to write code or documentation that has pedagogical rather than scientific purpose.
To make Julia “popular”, one has to be able to target one of the popular domains, and no, scientific computing and writing simulators don’t count. Julia has certain strengths and is very fun to code in, so I would obviously want to be able to do more projects in it.
What are these possible domains?
Machine Learning (heavy loads)
Training: I spent a lot of time trying to use Flux for heavy loads and gave up; I no longer recommend it to anyone, because both PyTorch 2.0 and JAX are roughly 2x faster at training certain models. There is no consensus on how to do training on multiple GPUs/nodes (once again, a random package with a CV model demo doesn’t count). I doubt anything similar to FSDP exists or is even possible. Even for models that take a couple of hours to train (assuming they fit on a single GPU), a 2-hour training time versus 1 hour on other frameworks will never be acceptable.
Inference: Suffers for the same reasons. In addition, libraries like ggml have made it possible to run things fast on the GPU. As I understand it, this could have been done in Julia, but we don’t have support for precisions that are not multiples of 8 bits, even though LLVM supports them. I can’t find the relevant post on this forum now, but all I saw was someone requesting it and another person just berating them (the post is quite old, from before ggml etc. happened).
Possible fixes: Flux/Lux supporting an XLA backend as a start (for kernel fusion and related optimizations), plus a stable implementation of multi-node training in the libraries themselves.
Distributed Systems/Backend
Better learning materials for Dagger etc.
Benchmarks with Go/Python frameworks
Porting the MIT Distributed Systems course assignments to Julia would be interesting.
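For anyone who hasn’t tried Dagger, a minimal taste of its task-based API (the `Dagger.@spawn`/`fetch` calls are Dagger’s public API; the toy workload itself is made up for illustration):

```julia
using Dagger

# Spawn two independent tasks; Dagger schedules them on available workers/threads.
a = Dagger.@spawn sum(1:100)
b = Dagger.@spawn sum(101:200)

# Passing tasks as arguments makes Dagger track the dependency automatically.
c = Dagger.@spawn a + b

fetch(c)  # → 20100, i.e. sum(1:200)
```

Learning materials that start from something this small and build up to multi-node graphs would go a long way.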
Systems Programming - Zig is the new hot thing now. My knowledge is limited, but can we write good systems programming code in Julia?
Please forgive me for complaining so hard, but I am practically heartbroken over the first point.
Well, this is multiple dispatch. People often suggest restricting generics to make them throw errors, but what if we made them… just work? Or, if that’s not possible, throw a nice error via multiple dispatch; but mostly, just make it work. Maybe make more use of function namespaces?
And we can actually use this interface pattern…
```julia
interface(x) = error("Interface not implemented")  # information here
interface(x::something) = nothing
```
And we can check the interface just by calling the function.
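Spelled out as a runnable sketch (the `Duck` type and the error message are made-up examples, not anyone’s real API):

```julia
# Fallback method: an informative error instead of a bare MethodError.
interface(x) = error("interface not implemented for $(typeof(x))")

struct Duck end
interface(::Duck) = nothing   # Duck declares that it implements the interface

interface(Duck())             # returns nothing: the interface is implemented
# interface(42)               # would throw: "interface not implemented for Int64"
```

The nice part is that the “check” is just an ordinary call, so it composes with dispatch like everything else.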
Anecdotally speaking, Yuri’s blog post comes up frequently. And from the same thread, here is another negative take, although I don’t know the context of the implemented method he is speaking of (in general, he is talking about working with Julia both in research and in production of a high-frequency trading system):
I suspect the reason that specific post comes up so frequently is because it is a publicly accessible and concrete list of examples without having to compile a new list—but I also suspect that anyone who has worked with Julia long enough has encountered some bugs that one would not expect to find in other peer languages popularly considered “robust.”
Often this is due to functionality/composability that Julia attempted to add and that is not present in other languages. One example I recently came across is Iterators.reverse(::Iterators.Filter). In e.g. Python, one will get TypeError: 'filter' object is not reversible, and in e.g. C++ or Rust, any reversed iterator must be explicitly double-ended. Julia, on the other hand, will happily hand you an iterator that is incorrect in some cases. So more functionality, kind of, but the answer can be wrong.
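To make that pitfall concrete, here is one contrived example of my own showing how the reversed filter can silently diverge from reversing the result, assuming the predicate has side effects:

```julia
# A stateful predicate: accept only the first 3 elements it is asked about.
seen = Ref(0)
pred(x) = (seen[] += 1) <= 3

seen[] = 0
forward = collect(Iterators.filter(pred, 1:6))                      # [1, 2, 3]

seen[] = 0
backward = collect(Iterators.reverse(Iterators.filter(pred, 1:6)))  # [6, 5, 4]

# Reversing the iterator is not the same as reversing the collected result:
backward == reverse(forward)  # false
```

Julia happily runs both, whereas the languages above refuse up front; whether that is a feature or a footgun depends on the predicate.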
I know this is a sample of one, but I personally value this person’s opinion a lot, since my interests align with his line of work and he is much more experienced than me.
Something like what Chris is working on: basically a guarantee that a loop completes under a certain threshold with predictable behavior. We can already achieve real-time performance by preallocating and avoiding allocations in hot loops, but we still have no direct, deterministic control over allocation.
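As an aside on the “no allocations in hot loops” point, a small sketch of what that discipline looks like today (the `smooth!` function is made up for illustration; `@allocated` is the standard Base macro):

```julia
# Preallocate the output buffer; the hot loop then allocates nothing.
function smooth!(out::Vector{Float64}, x::Vector{Float64})
    @inbounds for i in 2:length(x)-1
        out[i] = (x[i-1] + x[i] + x[i+1]) / 3
    end
    return out
end

x = rand(1000)
out = zeros(length(x))
smooth!(out, x)                    # first call compiles
@allocated smooth!(out, x)         # steady state: 0 bytes allocated
```

It works, but it is a convention you verify after the fact, not a guarantee the language enforces, which is exactly the gap being described here.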
It would be really good if the Julia garbage collector could set an upper bound on its pause duration. The garbage collector is multithreaded as of 1.10; if it could run in its own thread until it absolutely needed to pause everything, and that pause were bounded at some duration, say 100 µs, that would go a long way towards extending the interactive space Julia could support.
So, more static support, and better control over allocations? But I don’t understand the term “real time mode” in this context. What does this have to do with “real time”? And how would this be a “mode”?
I’m not trying to be difficult, I just don’t understand the terminology.
“Real-time systems” like robots or live audio systems need to respond to signals within 1 ms or so. They can’t do that if the compiler or garbage collector interrupts them with delays that exceed their time budget.
Some OSes, e.g. Linux, can be compiled to run in a “real-time mode”. This sacrifices some average performance in favour of hard guarantees on the time within which high-priority threads are served once they become active.
Well, “mode” is just a figure of speech for (as an example) a subset of Julia that lets you overcome the challenges mentioned in the comments above, which the current tooling/features do not; the downside is that you would probably lose some default language features. StaticCompiler.jl is a good example of this: it lets you create tiny binaries with fully manual memory management, but you cannot use it with anything that allocates (i.e., is tracked by the GC).