Is Julia Falling Behind in Relevance? (because it's not used in LLM research?)

There is stuff in Julia, for example:

I do not like to limit my inquiry [I have lately explored building LLM-type models for jazz improvisational styles]

1 Like

Thank you for this. I did see this, but am not at the level to be able to make use of this yet - maybe as I get more knowledgeable.

1 Like

I feel like this thread is going to go on for a very long time but not get much beyond three true observations:

  1. Julia is not the dominant language for deep learning and is unlikely to become dominant soon. Those who want to work in deep learning are often better off using Python, the dominant language.
  2. The ability to make use in language L of capabilities built using deep learning does not depend on language L being the dominant language for deep learning.
  3. There is interesting programming work outside of deep learning that Julia is contributing to.
22 Likes

This does not follow. Consider the following:

  1. at the moment most people who are not programmers interact with computers via some GUI,
  2. Julia could not be described as having a comparative advantage in writing GUIs, even by people who otherwise love the language,
  3. yet Julia is widely used for scientific computing.

You are confusing user interfaces with programming languages. Some languages have a comparative advantage in implementing user interfaces, but this is itself a relatively niche area, mostly taken over by “toolkits” to which languages can bind.

6 Likes

The applications of Large Language Models are impressive, but I feel that the impact of this field on global energy consumption deserves to be seriously questioned. We are no longer talking about a few 10^4 (10^5?) HPC simulation designers/users who can, through their applications (e.g. climate simulation), easily find their justification. We are talking about 10^9 humans who can use computationally intensive means (GPU/TPU) on a daily basis for subtitling or automatic translation of a YouTube video (for example).

This is all very fun (like everyone else I played with copilot and so on) but it seemed to me that the global energy consumption was a primary issue that should form the framework of our judgement on the emergence of this or that innovation, as amazing as it is.

To stay in the realm of amazement: I am literally flabbergasted that one can imagine that the field of Large Language Models is close to a hegemonic situation in the whole of scientific research.

Translated from French by a famous LLM engine :wink:

9 Likes

Can someone give an estimate of how much energy your translation/post has consumed? Something like “typical energy used per km by a car”, a unit I can easily comprehend, even if the estimate is very inaccurate?
Serious question!

2 Likes

I am clearly not a specialist in this topic (but I do use GPUs). Anyway, “Large Model” is meaningful: it implies accessing a large amount of data for each inference (the training cost is probably negligible by comparison).

I am pretty sure that this forum can provide a good estimate.

A first link from google.
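Since the thread never settles on numbers, here is a hedged back-of-envelope sketch in Julia. Both input figures are assumptions for illustration only: the per-query energy (0.3 Wh) is a rough guess in the range of published estimates, which vary by an order of magnitude, and the car figure (0.6 kWh/km) corresponds roughly to a petrol car burning 6 L/100 km.

```julia
# Back-of-envelope comparison: energy of one LLM query vs. driving a car.
# Both figures below are assumptions for illustration, not measurements.

query_Wh       = 0.3   # assumed energy per LLM inference query (Wh)
car_kWh_per_km = 0.6   # assumed fuel energy of a petrol car (~6 L/100 km)

query_kWh    = query_Wh / 1000
km_per_query = query_kWh / car_kWh_per_km   # car-km equivalent of one query

println("one query ≈ ", round(1000 * km_per_query, digits = 2), " m of driving")

# Scaled to 10^9 users making one query each:
total_MWh = 1e9 * query_kWh / 1000
println("10^9 queries ≈ ", round(Int, total_MWh), " MWh")
```

With these assumed inputs, one query comes out to roughly half a metre of driving, and a billion queries to a few hundred MWh; the real figures depend heavily on model size, hardware, and data-centre efficiency.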

6 Likes

You may be interested in what a couple other prominent AI researchers think about LLMs. In this video of a conversation between Yann LeCun and Andrew Ng, Yann says

We as humans are very language oriented. We think that when something is fluent it is also intelligent, but that’s not true and those systems have a very superficial understanding of reality.

An “understanding of reality” requires a mental model of the world. As far as I can tell, Julia would be good at that sort of modeling.

4 Likes

Are LLMs the new kid on the blockchain?

In all seriousness, LLMs have a lot of promise in terms of usefulness. However, it seems largely irrelevant in which language they are written. Plus, one could argue they are barely written in Python anyway. There’s a lot of movement in the open source community regarding models. But I think, at some point they will be a commodity.

6 Likes

The consumption is not so bad if you compare it with what a human requires for, say, 30 minutes of work. For that work, a person needs transport, prior education, communication, entertainment, and sometimes medical support.

More generally, I think there is a bright future ahead for computing. Just think about it: what if everything around us were 10 times better, software-wise? It is likely to happen: just go back 10 years and you would never have imagined how things look nowadays. Also, Moore’s law and Dennard scaling have slowed down, so software engineers will have to work harder to keep performance improvements going. Julia is well placed for performance-related research, since there is a lot of flexibility for compiler-related work (e.g., Julia’s SIMD implementation, which, if I understand correctly, was simply bolted on).

Will all this work be done by LLMs some day? Maybe some day, but not today.

1 Like

Yes, it’s easy to point out the bad sides, but there are also many good sides. Things have gotten much better in transport, medicine, and entertainment. We can now learn arcane topics via YouTube or have smooth video calls across the world, for example. Whether one wastes their life on social media is a choice.

1 Like

The fact that people have chosen to mock and ridicule me for asking a simple question here is unfortunate, and telling. I may be wrong, but I don’t think the surprising impact LLMs have had in the past six months is a ‘flash in the pan’. [I remember when people mocked Deep Learning as ‘hype’.]
I think that if Julia developers fail to make a concerted effort to catch up in this area, Julia could end up as a mere novelty footnote in the annals of computing history (akin to APL - elegant, but ephemeral).

@compleat I see very little hint of mockery or ridicule here besides one sarcastic quip. What I see is quite a bit of earnest engagement with a fairly provocative post. Please read folks’ writings here assuming the best of intentions, and if you see something that rubs you the wrong way, please flag it for moderator attention.

5 Likes

If that was me, sorry; I was just taking the thread lightly. I do think, though, that this long-term anguish about the future of languages is a bit exaggerated. If we are using Julia now, it is because, right now, it is useful and pleasant.

11 Likes

If I may say something about this upon re-reading: regardless of whether Julia is the right language, learning one language made me better at other languages. But yes, I also sometimes look jealously at the number of stars on some shiny new Python package.

Well, there was the quip about how we’re always hearing that flat-screen TVs are around the corner and they never arrive. That could be interpreted as a bit of a jab, but it was a point in your favor. Sarcasm can be concise and effective, but unfortunately also offensive. In any case, aside from the title, which seemed to generalize too much, the question of the relevance of Julia in light of the public LLMs is legitimate and not unreasonable to raise.

Thank you [noted about the possibly ‘over-generalised’ title]