The Matthew Effect of AI Programming Assistants: A Hidden Bias in Software Evolution
It would be great if you could add some more information instead of just dumping a link into the forum. Especially since you posted in Community, it would be great to see some Julia relevance.
It seems like they’ve done some Julia comparison. But to me, the overall finding of the paper isn’t all that surprising?
It sounds like it ought to be true, but it’s not actually my experience - LLMs are really good with Julia and quickly help even with some of my more niche Julia packages, once they read the docs.
@ChrisRackauckas actually backs this up with some data:
I have the feeling it might just as well go the other way, since it’s suddenly pretty easy to translate fairly complex functionality from other languages to e.g. Julia and fill some of the gaps that are still missing in the ecosystem.
Also, it’s way easier to learn a new language with LLMs, so I think people will be able to choose more freely, which could lead to popularity having less of an impact compared to the actual usability and performance of a language going forward.
The economics of LLMs are broken, so I don’t think a language’s vibe-code-ability is a metric that will matter in the long run.
I think more targeted use cases, where the scope is compatible with locally running LLMs like what @sdanisch shared, will have more staying power than fully autonomous agents.