I know e.g. ALBERT, a variant of BERT, has been recreated in Julia, and AlphaZero too, but all the (recent) papers I read still seem to be based on Python code, when the language is known to me (e.g. GPT-2 and the updated GPT-3). [SciML, merging neural networks and PDEs, may be the exception.] What (other) counterexamples are there?
Ok, often I’m not sure what language is used, e.g. for an interesting paper from July:
and: Dark, Beyond Deep: A Paradigm Shift to Cognitive AI with Humanlike Common Sense
https://arxiv.org/pdf/2004.09044.pdf
My hope was that, since we need, and keep inventing, newer algorithms all the time, obsoleting older ones, people would adopt Julia, and Python would simply become outdated that way.
Good blog:
This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.
The AI-generated blog’s popularity on the tech-focused Hacker News forum was taken as a measure that GPT-3 had managed to fool readers into believing a human had written the posts.
A post written by GPT-3 made it to the top of the Hacker News forum
The post has indeed received 198 points and 71 comments. “What most commenters didn’t realize: The post was generated entirely by artificial intelligence,” Business Insider wrote.
EDIT: I did find this from the main author behind Knet.jl (though the paper doesn’t mention whether Julia or another language was used):
Abstract: We present BiLingUNet, a state-of-the-art model for image segmentation using referring expressions. BiLingUNet uses language to customize visual filters and outperforms approaches that concatenate a linguistic representation to the visual input. […]