Translation of traditional ML models to neural network framework for acceleration?

Hi,

This may be a stupid question whose answer is already in the docs of one of the ML / NN frameworks, but are there efforts within the Julia community to do what the Hummingbird Python package is attempting to do - translate more traditional models to a neural network framework to gain acceleration?
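For context, Hummingbird's core trick is to compile a fitted model such as a decision tree into a handful of dense matrix operations, so that a tensor runtime can batch-evaluate it on CPU or GPU. Here is a minimal sketch of that idea in plain Julia; the tree, matrices, and values are made up purely for illustration:

```julia
# Hypothetical tree: if x1 < 0.5 then (if x2 < 0.3 then leaf1 else leaf2) else leaf3
# Internal nodes: n1 tests x1, n2 tests x2. Leaves: l1, l2, l3.

X = [0.2 0.1;          # batch of samples (rows) × features (columns)
     0.2 0.9;
     0.8 0.5]

A = [1.0 0.0;          # features × internal nodes: A[i, j] = 1 if node j tests feature i
     0.0 1.0]
B = [0.5, 0.3]         # threshold of each internal node

# Condition outcomes: T[s, j] = 1 if sample s satisfies "feature < threshold" at node j
T = Float64.((X * A) .< B')

C = [1.0  1.0 -1.0;    # internal nodes × leaves: +1 left subtree, -1 right, 0 off-path
     1.0 -1.0  0.0]
D = [2.0, 1.0, 0.0]    # per leaf: number of "true" conditions on the path to it

# A sample lands in leaf j exactly when its condition outcomes match the leaf's path
leafmask = (T * C) .== D'

E = [10.0, 20.0, 30.0] # value stored at each leaf
preds = leafmask * E   # [10.0, 20.0, 30.0] for the three samples above
```

Every step is a matrix multiply, a broadcast, or a comparison, which is exactly the kind of workload NN runtimes are optimized for.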

Fredrik

I believe that the ML ecosystem in Julia is still maturing, both for traditional models and for tensor-based differentiable models.

If you have the skills to contribute, take a look at some of the organizations:

JuliaML

This organization hosts various packages maintained by the community to do ML without a specific framework. I am mostly involved with the development of TableTransforms.jl, LossFunctions.jl and StatsLearnModels.jl; feel free to reach out.

FluxML

This organization hosts the Flux.jl framework for neural networks in Julia. There are alternative frameworks such as Knet.jl, Lux.jl, etc., but I believe that Flux.jl is still the main effort with the largest number of maintainers.
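For anyone unfamiliar, a minimal sketch of what a Flux.jl model looks like (the layer sizes here are arbitrary, just for illustration):

```julia
using Flux

# A small multilayer perceptron
model = Chain(
    Dense(4 => 16, relu),   # 4 input features → 16 hidden units
    Dense(16 => 3),         # 16 hidden units → 3 outputs
    softmax,                # normalize outputs to class probabilities
)

x = rand(Float32, 4, 10)    # 10 samples, features along columns
y = model(x)                # forward pass: 3×10 matrix of probabilities
```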

JuliaAI

This organization hosts the modules of the MLJ.jl framework for traditional ML models. The framework itself is hosted elsewhere as explained next.

Other organizations

You can find other organizations not managed by the Julia community, such as the Alan Turing Institute organization, which hosts the MLJ.jl framework for traditional ML models, or personal accounts hosting alternative frameworks such as BetaML.jl, ScikitLearn.jl, …

If you can’t find any ongoing effort in these places, then probably it doesn’t exist yet.

This effort exists in Python because Python is a very slow language. Traditional ML models implemented in Julia are already written in a fast language, so the same motivation does not exist here. You do not need an NN framework to compute on, e.g., a GPU in Julia.

Perhaps. I am not a Python user myself, so I cannot make a statement either way. Likely the models are implemented in C anyway, with a Python wrapper?

What I learned is that the community generally does not think that speed is an issue in Julia, and that converting to tensor-based computation on the GPU is not worth exploring.

Correct

This is not quite accurate. Shoehorning things into tensor computations is not always required for speed and is typically only done when that is the most natural way to express a computation. However, utilizing the GPU can be very useful in Julia as well, and there are several nice packages, like CUDA.jl, that make this easy. You might not have to use a GPU as often, since CPU speed is typically much higher than in Python, which is nice :slight_smile:
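To make that concrete, here is a minimal sketch of plain array code running on the GPU with CUDA.jl, no NN framework involved (assumes a CUDA-capable GPU with CUDA.jl installed):

```julia
using CUDA

# Move ordinary arrays to the GPU; they then behave like regular Julia arrays
x = cu(rand(Float32, 10_000))
y = cu(rand(Float32, 10_000))

# Broadcasting fuses into a single GPU kernel automatically
z = 2f0 .* x .+ y

# Reductions run on the GPU as well
total = sum(z)
```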

Hmmm… worth looking at the work that @giordano has done on getting Julia to run on Graphcore IPUs:

Running Julia on Graphcore IPUs