A new architecture just dropped, vastly different from the standard MLP. In case you were interested in implementing something in Julia, maybe this?

Any movement on this? Has anyone implemented an example from the paper using Flux.jl? It's a bit beyond my knowledge, but I would be super curious to see how it's done.

There seems to be a new package implementation now: GitHub - rbSparky/KolmogorovArnoldNetworks.jl: KolmogorovArnoldNetworks.jl is a Julia library providing implementations of Kolmogorov-Arnold neural networks

Wow! Julians are fast! Let’s see how well we do!

Still, these are arguably the early days of KAN. Let's see where it goes.

KAN is pitched as a more elegant replacement for the MLP, but what about CNNs or Transformers? Only time will tell. Or, if you want to try your luck, maybe go implement it?

I am curious about this new architecture: it fits the problem by learning the activation functions themselves, which gives higher interpretability and makes it well-suited for symbolic regression. Does this mean that, compared to an MLP, a KAN is more suitable for solving Neural Ordinary Differential Equations (Neural ODEs)?
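To make the "learning the activation function" idea concrete, here is a minimal, self-contained sketch of a single KAN-style layer in plain Julia. It is not the paper's implementation: I use a Gaussian RBF basis instead of the B-splines from the paper, and the layer names (`KANLayer`, `basis`) are made up for illustration. The key point it shows is that each edge (i, j) carries its own learnable 1-D function φᵢⱼ(x) = Σₖ cᵢⱼₖ·Bₖ(x), and outputs are plain sums over edges — there is no fixed nonlinearity like ReLU.

```julia
# Sketch of one KAN-style layer with learnable per-edge activations.
# Assumption: Gaussian RBF basis in place of the paper's B-splines.

struct KANLayer
    c::Array{Float64,3}   # coefficients, size (out, in, nbasis) — the learnable part
    t::Vector{Float64}    # basis centres, shared across edges
    σ::Float64            # basis width
end

function KANLayer(nin::Int, nout::Int, nbasis::Int; lo = -1.0, hi = 1.0)
    t = collect(range(lo, hi; length = nbasis))
    σ = (hi - lo) / (nbasis - 1)
    KANLayer(0.1 .* randn(nout, nin, nbasis), t, σ)
end

# Evaluate all nbasis RBFs at a scalar input x.
basis(l::KANLayer, x::Real) = exp.(-(x .- l.t) .^ 2 ./ (2 * l.σ^2))

# Forward pass: y_j = Σ_i φ_ij(x_i), where φ_ij(x) = Σ_k c[j,i,k] * B_k(x).
function (l::KANLayer)(x::AbstractVector)
    nout, nin, _ = size(l.c)
    y = zeros(nout)
    for i in 1:nin
        b = basis(l, x[i])                       # shared basis evaluation
        for j in 1:nout
            y[j] += sum(@view(l.c[j, i, :]) .* b)  # this edge's own activation
        end
    end
    y
end

layer = KANLayer(2, 3, 8)
layer([0.3, -0.5])   # 3-element output vector
```

Because the trainable state is just the coefficient array `c`, gradients with respect to it could in principle be taken with Zygote and the layer stacked like any other, but that wiring (and grid updates, regularization, etc.) is exactly what the packages linked in this thread handle properly.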

Here’s another Julia implementation if people want to start experimenting. It contains some key implementation changes that speed up evaluation compared with the original architecture.