Following a thread in the #machine-learning channel on the Julia Slack, I'm interested in putting together a Julia implementation of "Liquid Neural Networks". These are time-adaptive neural networks, inspired by neuroscience models, where each neuron's activity is governed by a differential equation. The synapses are also plastic and continue to adapt after training. I think this is very interesting, and Julia has a great ecosystem for automatic differentiation through differential equations.
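To make the idea concrete, here's a minimal sketch of a single liquid time-constant (LTC) cell using OrdinaryDiffEq.jl, assuming the dynamics from Hasani et al. (2021): dx/dt = -(1/τ + f(x, I; θ)) ⊙ x + f(x, I; θ) ⊙ A. All the names here (`ltc_rhs!`, the parameter layout, etc.) are illustrative, not taken from any existing implementation:

```julia
# Hedged sketch of an LTC cell, assuming the dynamics from
# Hasani et al. (2021): dx/dt = -(1/τ + f(x, I; θ)) .* x .+ f(x, I; θ) .* A
# Names and parameter layout are illustrative.

using OrdinaryDiffEq

n_neurons, n_inputs = 8, 2

# Trainable parameters: input weights, recurrent weights, biases,
# base time constants τ, and the bias states A.
θ = (W_in  = randn(n_neurons, n_inputs) .* 0.1,
     W_rec = randn(n_neurons, n_neurons) .* 0.1,
     b     = zeros(n_neurons),
     τ     = fill(1.0, n_neurons),
     A     = randn(n_neurons))

# Nonlinearity shared by the decay and drive terms.
f(x, I, θ) = tanh.(θ.W_rec * x .+ θ.W_in * I .+ θ.b)

# In-place ODE right-hand side; the input I is held constant here
# for simplicity (a real cell would feed in a time-varying signal).
function ltc_rhs!(dx, x, p, t)
    θ, I = p
    g = f(x, I, θ)
    @. dx = -(1 / θ.τ + g) * x + g * θ.A
end

x0 = zeros(n_neurons)
inp = randn(n_inputs)                      # one constant input, for demo
prob = ODEProblem(ltc_rhs!, x0, (0.0, 1.0), (θ, inp))
sol = solve(prob, Tsit5())
@show sol.u[end]
```

Training would then mean differentiating through `solve` (e.g. with SciMLSensitivity.jl), which is exactly where Julia's AD-plus-DiffEq ecosystem shines.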
Anyone interested in teaming up to cook up a Julia implementation?
I’ve uploaded an implementation as a public Pluto notebook to JuliaHub. Search for LTC and it should pop up. It’s tested on the sequential MNIST benchmark mentioned in the Hasani et al. (2021) paper.