Is Julia Falling Behind in Relevance? (because it's not used in LLM research?)

Like many of us in the Machine Learning/Data Science community, I have been blindsided by the tsunami of developments in Large Language Models (LLMs) over the past six months, not realising how fundamentally these advances would redefine ‘what is possible’ in just about any area of human endeavour. Similarly, it looks to me as if Julia itself may have been blindsided, as it seems to make no showing at all in the latest round of innovations and developments in the LLM space.

Over the past several years, I have been (happily) moving more and more of my work to the Julia ecosystem, but that plan has suddenly been turned on its head, as there seem to be no serious development tools or packages for LLMs in Julia (that I know of), nor any attempt to address this gaping hole (please correct me if I am wrong?). Suddenly, I have had to refresh my Python programming skills, catch up on its latest developments and conventions, and completely re-engage with that world, as it seems clear to me that in the coming years there will be no escaping it, no matter how proficient a Julia user and programmer I may become.

Or perhaps I am missing something obvious? [I truly hope that I am!]


The language is a tool; the goal determines which tool to use.
Road races aren’t won by bikes but by athletes, and you wouldn’t choose a road bike to win an MTB race.

Of course Julia isn’t the best in every discipline. Neither is any other language. Obviously.
I wouldn’t choose Julia to enhance my CMS; I’d choose PHP.

Does it make sense to say that Julia therefore loses relevance? The answer is “yes, relevance that doesn’t increase decreases”, but it doesn’t make much sense. Does a hammer lose relevance because you choose a saw for the wood?

LLMs aren’t implemented only in Python; there are surely other languages involved. Does this mean that Python’s relevance is somehow questionable?

It’s not Python that has the hype; it’s the new quality of how machines can interact with us. No one thinks “wow, Python must be so powerful”.

If you want to be a master in fighting, you have to master MMA, not just a single style. The same goes for programming.

Still, I like some languages more than others. I still like Julia more than Python, even though those LLMs amaze me very much. The power of Frodeno (the triathlete) amazes me too, but I do not know which bike he uses (well, of course, others do and buy the same one, yet they still fail to finish an Ironman).

This is great, embrace it.
And when the next hype kicks the relevance of something else, embrace it too.
Someday you will see one bigger world instead of many different small worlds, you will walk through it easily, and learning and refreshing skills will feel like breathing fresh air.


If I had a nickel for every time we heard this about AI research since the 1950s… I would probably still have something less than 10 dollars, but it would definitely be enough for a nice ice cream.


I wasn’t referring to AI more widely; I was referring to LLMs in particular. What I am talking about appears to have taken even Hinton by surprise.

Could you clarify whether you mean Julia is falling behind in building LLMs (i.e. the work done by OpenAI, Google, etc.) or in being used with LLMs (i.e. things like Copilot, getting a chatbot to write code for you, etc.)?

If you’re talking about the former I think you’re right - if you want to build LLMs to compete with OpenAI then Julia might not be the right thing for you for lack of ecosystem and community in that space.

If you’re talking about the latter then that’s evolving, people seem to report varying successes in using LLMs with Julia, and there were some discussions on here about how the community could help ChatGPT and friends to improve their Julia skills.
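On that second sense (using LLMs from Julia), here is a minimal, hypothetical sketch of calling a hosted chat model from Julia with nothing more than an HTTP client. It assumes an OpenAI-style chat-completions REST endpoint, an `OPENAI_API_KEY` environment variable, and uses `"gpt-3.5-turbo"` purely as an illustrative model name:

```julia
# Hypothetical sketch: querying a hosted chat model from Julia over plain
# HTTP, using HTTP.jl and JSON3.jl. Requires OPENAI_API_KEY to be set.
using HTTP, JSON3

function chat(prompt::AbstractString; model::AbstractString = "gpt-3.5-turbo")
    resp = HTTP.post(
        "https://api.openai.com/v1/chat/completions",
        ["Authorization" => "Bearer $(ENV["OPENAI_API_KEY"])",
         "Content-Type" => "application/json"],
        JSON3.write((; model, messages = [(; role = "user", content = prompt)])),
    )
    body = JSON3.read(resp.body)
    return body.choices[1].message.content
end

# chat("Write a one-line Julia hello world")
```

Community packages wrap this same REST interface more conveniently; the point of the sketch is only that nothing Julia-specific blocks this kind of use.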

The title is wrong: it doesn’t mention LLMs. It assumes that the entire world and all research is LLMs. That’s not the case. In fact, LLMs are a tiny part of what’s researched, and a tiny part of what’s relevant: one hip area of research with very small coverage.


And if I had a nickel for every time we heard that flat-screen televisions were coming ‘within a few years’, I’d also have a lot of nickels.


Indeed, this is what I am referring to. I think LLM research will be increasingly distributed rather than centralised, and integrated with everything, and I think Julia users could be largely left out of this as things stand.

I defend the title - I think that through neglecting the importance of this area (which is now arguably showing itself to be much more than ‘niche’), Julia could be seriously jeopardising its relevance to the challenges of the future. I wouldn’t want to see that happen - hence this post.


Climate research is a pretty big deal, and those codes don’t seem to be written by LLMs. Biology research had one of its biggest revolutions during the 2010s, and its newest bioinformatics binaries don’t seem to be written using LLMs. Integrated electric-vehicle research is huge, and it’s pretty much all dynamical-modelling tools (Simulink, Dymola, ModelingToolkit, …). I could keep going. The world is huge; there’s a lot going on, and only a small fraction of it is LLMs. Yes, someone should do something with LLMs. However, reducing the entire complexity of today’s research to LLMs is a complete misunderstanding of what they are and of what is going on across many industries.


You are not wrong, but what you present here are use cases from very centralised research efforts. Most Julia users will be outside these efforts and, I think, will be left out. So maybe Julia will become the language for weather and systems-biology labs, with smaller users de-prioritised?

There are thousands of research areas like this that need thousands of people centralising the research effort. Some areas are doing well, others not so much. LLMs are one such area.


Note that using LLMs/embeddings/… from Julia is barely different from using them from Python. Thanks to PyCall or PythonCall, the syntax for calling Python libraries from Julia feels basically native. PythonCall also includes a sane, Julia-like way to manage Python dependencies that’s arguably better than common Python tools themselves.
I don’t remember any Julia-specific issues when using Hugging Face transformers or segment-anything from Julia. And there is a Julia package for OpenAI LLMs.
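To make the interop concrete, here is a hedged sketch of driving a Hugging Face text-generation pipeline from Julia via PythonCall. It assumes the Python `transformers` package is installed in the CondaPkg-managed environment, and `"distilgpt2"` is just an illustrative small model:

```julia
# Sketch: calling a Python Hugging Face pipeline from Julia via PythonCall.
# Assumes the `transformers` Python package is available; downloads a model
# on first use, so this is not something to run in an offline test.
using PythonCall

function generate(prompt::AbstractString; model::AbstractString = "distilgpt2")
    transformers = pyimport("transformers")           # import the Python module
    pipe = transformers.pipeline("text-generation"; model)
    out = pipe(prompt; max_new_tokens = 20)           # Python list of dicts
    return pyconvert(String, out[0]["generated_text"])  # Python 0-based indexing
end

# generate("Julia is a language for")
```

Keyword arguments pass straight through to Python, and `pyconvert` brings the result back as a native Julia `String`, which is what makes this feel essentially the same as the equivalent Python code.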


I am not handy enough with low-level LLM research to fully understand the details of what you say, but from the sound of it, this looks like it could be a constructive step in the right direction.

This is just for deployment of existing models, not building, training, or extending.

Thank you for clarifying, although if this is the case I don’t quite understand how that is connected to

From our previous interactions I seem to recall that you are an economics/finance academic using Julia for research and teaching purposes? I don’t really see how other people building LLMs in other languages has any impact on that?

I suppose one could argue that LLMs will become an important research and development area in future, and that the language(s) this research is done in will therefore see an increase in users and developers, with spillover benefits to people using the language for other use cases. But in that case I’m not convinced that LLMs are a large enough area of development (as opposed to, say, general data science, engineering applications, web dev, etc.).


I believe that LLMs will become the dominant way in which humans interact with computers in the future (I believe Jeremy Howard foreshadowed something like this some years ago, perhaps not using quite the same terminology), and if one agrees, then they need to become part of the ‘fabric’ of any platform. To equivocate on this (imo) could end up leaving Julia in the ‘niche’ category (e.g., weather and systems biology, as mentioned earlier).


This means that we won’t need any programming languages anymore. Perhaps this can be called an evolution.


In the future we’ll all be dead.


Okay, this is slightly different from either option 1 or 2 in my previous post: you are saying LLMs can only be successful with languages that they themselves have been written in. I would disagree with that assertion. I have no idea what languages LLMs are actually developed in, but I’m pretty sure the heavy lifting is done by some sort of optimised C/C++ kernels, and their development is pretty far removed from the application of the LLM.

As far as I can tell, the quality of LLM interactions when coding is currently mainly a function of (i) how much public code there is out there for a language and (ii) how much the language and its relevant libraries have changed since the end of the LLM’s training data.
