"Homomorphic machine learning" in Haskell; goal to be more generic and faster than Julia

https://github.com/mikeizbicki/HLearn

" HLearn is also a research project. The research goal is to discover the “best possible” interface for machine learning. This involves two competing demands: The library should be as fast as low-level libraries written in C/C++/Fortran/Assembly; but it should be as flexible as libraries written in high level languages like Python/R/Matlab. Julia is making amazing progress in this direction, but HLearn is more ambitious. In particular, HLearn’s goal is to be faster than the low level languages and more flexible than the high level languages."

Looks interesting. I couldn’t tell for sure, but maybe it has some lessons for Flux etc.

1 Like

The performance of Haskell compiled with GHC tends to exceed that of, e.g., gcc only when there is some particular optimization path that GHC can exploit and gcc doesn’t. So no novel lessons for Julia there; I think the compiler team already knows they can use type inference to apply novel optimizations to LLVM and the IRs above it.
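For anyone who hasn’t poked at it, the “use type inference to apply optimizations to LLVM and the IRs above it” part is easy to see interactively. A minimal illustration in plain Julia (nothing HLearn-specific):

```julia
using InteractiveUtils  # provides @code_typed and @code_llvm

# Julia specializes each method on the concrete types it is called with,
# so the IR below is generated specifically for Float64 arguments.
f(x) = 2x + 1

@code_typed f(1.0)  # the inferred, typed Julia IR (above LLVM)
@code_llvm f(1.0)   # the type-specialized LLVM IR handed to the backend
```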

1 Like

Yes, but what about the novel ML abstractions?

It doesn’t seem to be maintained anymore?

I think an issue could be that if you try to do something really sophisticated to get an extra few percent of speed, you may lose out by not having many people who can help maintain it… I don’t know if that’s the case here, but if you take Flux, the code is mostly very nice to read, which makes it somewhat easier to contribute to.

The author explains a bit about why it’s not being developed anymore.

https://news.ycombinator.com/item?id=14409595

" Hi everyone, author of HLearn here :slight_smile:

This is a bit awkward for me as I’ve paused the development of HLearn and emphatically do not recommend anyone use it. The main problem is that Haskell (which I otherwise love) has poor support for numerical computing."

Later, on why the Haskell type system doesn’t do what he wants:

“For example, I want the compiler to automatically rewrite my code to be much more efficient and numerically stable”

and

“It’s common in machine learning to define a parameter space \Theta that is a subset of Euclidean space with a number of constraints. For a simple example, \Theta could be an ellipse embedded in R^2. In existing Haskell, it is easy to make R^2 correspond to a type, and then do automatic differentiation (i.e. backpropagation) over the space to learn the model. If, however, I want to learn over \Theta instead, then I need to completely rewrite all my code. In my ideal language, it would be easy to define complex types like \Theta that are subtypes of R^2, and have all my existing code automatically work on this constrained parameter space.”
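On the \Theta-versus-R^2 point, here is a rough sketch of how you’d do it in Julia today, assuming ForwardDiff.jl for the differentiation and a hand-written parametrization of the ellipse (the function names are made up for illustration):

```julia
using ForwardDiff

# Parametrize the constrained space Θ (an ellipse in R^2) by an angle t,
# so code written for unconstrained points in R^2 can be reused as-is.
ellipse(t; a = 2.0, b = 1.0) = [a * cos(t), b * sin(t)]

# A loss written for plain R^2, knowing nothing about the constraint.
loss(θ) = (θ[1] - 1.0)^2 + (θ[2] - 0.5)^2

# Differentiating the composition restricts learning to Θ
# without rewriting the loss.
dloss(t) = ForwardDiff.derivative(t -> loss(ellipse(t)), t)

dloss(0.3)
```

That’s reparametrization rather than the subtype relationship he’s asking for, but in practice the existing code doesn’t need to be rewritten.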

Sounds like Julia hits most of these notes, no?
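The “automatically rewrite my code to be more efficient and numerically stable” wish from the first quote is the harder one; as far as I know no compiler does that for you, and such rewrites are still done by hand. The textbook example, sketched in Julia:

```julia
# log(sum(exp.(x))) overflows for large x; the shifted "log-sum-exp"
# form is algebraically identical but numerically stable.
naive_logsumexp(x) = log(sum(exp.(x)))

function stable_logsumexp(x)
    m = maximum(x)
    return m + log(sum(exp.(x .- m)))
end

naive_logsumexp([1000.0, 1000.0])   # Inf (exp overflows)
stable_logsumexp([1000.0, 1000.0])  # ≈ 1000.6931
```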

10 Likes

To put things on a time line:

  1. That README was last updated April 2016
  2. Zygote’s first tagged release was March 2017.
  3. The HN comment was May 2017.
  4. On Machine Learning and Programming Languages was published December 2017.
  5. Julia 1.0 and Cassette were first released in August 2018.
  6. Building a Language and Compiler for Machine Learning was published December 2018.

A lot of exciting things have happened (and plans have been written up) since the quote in the opening post was written!
Swift’s developments also happened since then (and Swift didn’t even get a mention at the time).

7 Likes