I guess the question is “what kind of vulnerability”? Looking for vulnerabilities in languages has historically meant buffer overflows and the like, with other vulnerabilities attributed to a language most often being targeted at a specific application written in that language. I’m not aware of any filed CVEs or similar though (and Julia being a garbage collected language, memory corruption is unlikely to be the most prevalent/pressing class of vulnerability), and it’s unclear what a CVE would even look like in the context of machine learning. Does the use of a machine learning algorithm in and of itself constitute a “vulnerability”, since such algorithms are bound to have some uncertainty in classification and are “vulnerable” to crafted inputs? And what would make Julia different from Python here, if the vulnerability is inherent to the algorithm and not to the language?
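Just to make the “inherent to the algorithm, not the language” point concrete, here’s a minimal sketch of the classic FGSM-style crafted input. It assumes Flux.jl and uses a toy untrained model with made-up numbers; nothing in it is specific to Julia, and the same handful of lines exist in any Python framework:

```julia
# Hypothetical sketch: an FGSM-style adversarial perturbation against a toy
# classifier. The "vulnerability" is a property of the model/algorithm, not
# of the language it happens to be written in.
using Flux  # assumes Flux.jl is installed

model = Chain(Dense(4, 3), softmax)            # toy classifier, untrained
x = rand(Float32, 4)                           # some input
y = Flux.onehot(2, 1:3)                        # its (made-up) true label

loss(x) = Flux.crossentropy(model(x), y)

# Gradient of the loss w.r.t. the *input*, then step in its sign direction.
g = gradient(loss, x)[1]
ϵ = 0.1f0
x_adv = x .+ ϵ .* sign.(g)   # crafted input that tends to increase the loss
```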
There is some similar discussion here:
I’m happy to discuss/elaborate in more detail, though I imagine it’ll be of limited importance to the DoD, which, as is often the case with large institutions, doesn’t seem to appreciate that such a broad classification into “good” and “bad” doesn’t necessarily work here. It’s more nuanced than that.
That being said, a security audit (whatever form that may take for a language itself; most probably of the GC and the compiler) would be appreciated.