On Machine Learning and Programming Languages

New blog post on the Julia website

17 Likes

A lot of effort went into this post to make sure it’s not just about Julia, but really reviews the developments in machine learning and asks the question in an unbiased way.

I personally think that many ML frameworks are slowly becoming more like programming languages, and something like Julia + native AD + Flux/Knet + GPU/other hardware codegen makes for a really compelling ML platform.
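To make that concrete, here is a minimal sketch of what a Flux model looks like - assuming a recent Flux API; the gradient call and parameter handling have changed across versions, so treat this as illustrative rather than exact:

```julia
using Flux

# A toy classifier: plain Julia code, differentiated natively,
# with no separate graph-building step.
model = Chain(
    Dense(28^2, 32, relu),
    Dense(32, 10),
    softmax)

x = rand(Float32, 28^2)   # dummy input
y = model(x)              # the forward pass is just a function call

# Gradients come from Julia-native AD over ordinary code.
gs = gradient(() -> sum(model(x)), Flux.params(model))
```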

Would be great if folks could try out what we have, and see if it does in fact address the issues raised in this blog. Assuming all goes well, and we can complete the work, it would be good to develop this and submit it to an appropriate conference/journal.

(I also changed the tag of this post to Machine Learning)

-viral

11 Likes

We saw many hits from China on this post. I wonder if we should do a Chinese translation and host it on julialang.org as well as send it to some Chinese tech outlets. If someone is up for this, I’d be happy to help.

-viral

1 Like

Nice post. The main problem is the lack of users and of big companies really pushing libraries for the language. Python has a lot of great engineers from Google and Facebook helping to build TensorFlow and PyTorch; Julia is not in the same situation.

And yet when I look at the domains where Julia development has focused, it really blows my mind how well Julia holds up against Python’s libraries. Comparing DifferentialEquations.jl to the offerings in SciPy is a night-and-day difference, yet DifferentialEquations.jl is mostly the work of one developer.

That’s insane! It really seems that the work of a handful of highly productive people in Julia over a couple of years is a fair match for the work of entire industries in Python over a decade. This isn’t because Julia devs are gods of programming (though it seems that some are close); it’s because the language is cooperating with them instead of fighting them at every turn.
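To make the comparison concrete, here is roughly what solving a small ODE with DifferentialEquations.jl looks like - a minimal sketch of the standard interface (the f(u, p, t) problem signature), with an arbitrary toy problem:

```julia
using DifferentialEquations

# du/dt = 1.01 * u on t in [0, 1], with u(0) = 0.5
f(u, p, t) = 1.01 * u
prob = ODEProblem(f, 0.5, (0.0, 1.0))
sol = solve(prob, Tsit5())   # Tsit5: a good default explicit Runge-Kutta method

sol(0.5)   # the solution carries a continuous interpolant, callable at any t
```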

13 Likes

You mean Julia Computing does not count as big as Google and Facebook? Just kidding. :wink: At one level, we can’t stop doing what we are doing just because we don’t have big companies behind us. But we definitely need to be smart and leverage our advantages. Also, I think that Google is invested in TensorFlow and Facebook in Torch - not necessarily in the wrapping language. Both of these are compilers in their own right.

At some level, if all of us thought that big companies are necessary - we wouldn’t have started Julia or even got this far as a community.

-viral

9 Likes

I totally agree with your observation. There is actually a lot of space to explore. For example, there is no deep learning framework that allows us to use AMD GPUs. If Julia is the first language to solve this problem, it might motivate a lot of people to try the language (HBM2 AMD GPUs for $600 vs. $10,000 for the competition).

3 Likes

Flux.jl with CLArrays.jl?

4 Likes

Well, work is certainly afoot:

https://github.com/JuliaLang/julia/pull/24142

4 Likes

It might be a solution, but the best thing would be something similar to CUDAnative as an alternative for writing OpenCL-like code, without the need for Transpiler.jl.
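For reference, here is roughly what the CUDAnative programming model looks like - the hope would be to write the same kind of kernel against OpenCL/AMD hardware without a transpilation step. This is a hedged sketch: the @cuda launch keywords have changed between CUDAnative versions.

```julia
using CUDAnative, CuArrays

# Element-wise vector addition written as plain Julia, compiled
# through LLVM straight to the GPU.
function add_kernel!(c, a, b)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(c)
        @inbounds c[i] = a[i] + b[i]
    end
    return nothing
end

a = CuArray(rand(Float32, 1024))
b = CuArray(rand(Float32, 1024))
c = similar(a)

@cuda threads=256 blocks=4 add_kernel!(c, a, b)
```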

You posted it seconds before I wrote my comment! I didn’t know about this! AMD/Intel, where are you? Put a couple of engineers to work on this!

That’s in the works.

My suspicion is that Chris cloned himself a few times. Even with Julia being as productive as it is, that is the only possible explanation!

15 Likes

The macro @spawnat chris is in the works.

7 Likes

Well, the code you write in CLArrays is pretty much identical to what you would write for CUDAnative :wink:

I’d love to have people write more hardware-independent code with GPUArrays anyway, which should again look almost identical to Julia code for CUDAnative - the biggest difference is that it will just run on all platforms.

But sure, an LLVM based approach would be nicer for the future!
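As a rough illustration of that hardware-independent style - a sketch assuming the CLArrays/CuArrays array constructors of this generation; names may differ - the same array-level code can target either backend:

```julia
using CLArrays   # or: using CuArrays - the code below stays the same

A = CLArray(rand(Float32, 1024, 1024))
B = CLArray(rand(Float32, 1024, 1024))

# Broadcast fusion compiles this into a single kernel for the device
# backend; swapping CLArray for CuArray changes nothing here.
C = A .* B .+ 2f0

total = sum(C)   # reductions run on the device, scalar result on the host
Array(C)         # explicit copy back to host memory when needed
```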

We should write a Flux model to train on Chris.

5 Likes

Having a DNN library with AMD/Intel GPU support would indeed be a game changer.

I really don’t understand why Intel/AMD aren’t pushing it harder.

By the way, there is one such project, though limited to Intel hardware - Intel MKL-DNN. I’m not sure whether they will one day target Intel GPUs. Maybe…

I can help with the translation. Where should the texts go?

If you’re comfortable with GitHub, I think we can prepare a new blog post in Chinese right there. If not, send me a Google Doc or anything you prefer, and I can prepare the blog post myself.

-viral

GitHub is okay for me. What’s the desired format - LaTeX or Markdown?

Edit: if LaTeX, could the source file be public in the repo?