Reservoir computing or echo state networks in Flux.jl or Knet.jl

Can any of the Julia machine learning frameworks do reservoir computing or echo state networks? Are there examples of how to configure Flux.jl or Knet.jl to do either of these on a simple problem?

As far as I understand it, reservoir computing is a form of recurrent neural network in which the weights and connections of the network and of the input->network map are randomly assigned and held fixed, and training is applied only to the network->output map.
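In symbols (my reading of the standard echo state network setup, so a sketch rather than a definitive formulation):

$$
x_t = \tanh\left(W_{\mathrm{in}}\, u_t + W\, x_{t-1}\right), \qquad y_t = W_{\mathrm{out}}\, x_t
$$

where $W_{\mathrm{in}}$ and $W$ are drawn at random and held fixed, and only $W_{\mathrm{out}}$ is fit, typically by linear (ridge) regression on the collected reservoir states.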

A student of mine wants to replicate and extend the results of the following paper, which applies reservoir computing to predicting chaotic and spatiotemporally chaotic dynamical systems. Neither of us knows much about machine learning; we’re hoping to learn through this project.

I’m going to start digging into the Knet tutorial…


Flux could certainly train an echo state network, though I don’t know of an existing implementation.

If I’m not mistaken, echo state networks are largely obsolete today. Ideas from reservoir computing can certainly be used to initialize the weights of an RNN such that the matrix that transforms the hidden state has eigenvalues corresponding to known resonances in the data. Once initialized, there is no reason I know of today not to train all parameters.
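A rough sketch of that kind of initialization (my own simplification: just rescaling a random recurrent matrix to a target spectral radius, leaving aside the resonance-matching part):

```julia
using LinearAlgebra

# Reservoir-style initialiser: random recurrent matrix rescaled so that its
# largest eigenvalue magnitude (spectral radius) equals `radius`.
function reservoir_init(n; radius = 0.95f0)
    W = randn(Float32, n, n)
    return W .* (radius / maximum(abs.(eigvals(W))))
end

Wh = reservoir_init(64)   # candidate hidden-to-hidden weights for an RNN
```

You could then plug a matrix like `Wh` into the recurrent layer of a Flux RNN (exactly how depends on the Flux version) and train all parameters, rather than freezing it as a classical ESN would.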

That might be true for echo state networks, but reservoir computing as a whole has other, more powerful models, like liquid state machines (LSMs), which use a randomly connected spiking neural network as the reservoir and so aren’t trivially trainable. If you could get one of those working in Flux or Knet (which would definitely be a bit more difficult than an ESN), it would be considerably more powerful in my opinion.

EDIT: To respond to the initial question that was asked: Flux can definitely make this work; you’d just have to make sure that only the reservoir-to-readout connections are updated in your training function. If you go with an LSM, you’ll also need to decide how to convert from spikes to continuous values, and that conversion will need to be differentiable.
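For concreteness, here’s a minimal sketch of the ESN case, assuming Flux’s implicit-parameter API (Flux.params / Flux.train!); the names (n_res, esn_states, etc.) are mine, and a real ESN would typically fit the readout in closed form by ridge regression instead of iterating:

```julia
using Flux, LinearAlgebra

n_in, n_res, n_out = 3, 200, 3

# Fixed random reservoir: these matrices are never passed to the optimiser.
Win  = 0.1f0 .* randn(Float32, n_res, n_in)
Wres = randn(Float32, n_res, n_res)
Wres .*= 0.9f0 / maximum(abs.(eigvals(Wres)))   # keep spectral radius below 1

readout = Dense(n_res, n_out)                   # the only trainable layer

# Drive the reservoir with an input sequence and collect its states.
function esn_states(u)
    x = zeros(Float32, n_res)
    states = zeros(Float32, n_res, size(u, 2))
    for t in 1:size(u, 2)
        x = tanh.(Win * u[:, t] .+ Wres * x)
        states[:, t] = x
    end
    return states
end

# Toy one-step-ahead prediction task (stand-in for a chaotic trajectory).
T = 500
seq = randn(Float32, n_in, T + 1)
X = esn_states(seq[:, 1:T])
Y = seq[:, 2:end]

# Only the readout's parameters go to train!, so the reservoir stays fixed.
loss() = Flux.mse(readout(X), Y)
Flux.train!(loss, Flux.params(readout), Iterators.repeated((), 200), ADAM(1e-2))
```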

I’ve never used Knet before, so I won’t say whether or not it’s possible (although it probably is).


Hi! I’m also interested in replicating the results of the cited paper in Julia. Have there been any advances on implementing reservoir computing? John, did your student manage to implement it? Thanks!

My student found a Matlab package for reservoir computing and is using that rather than reimplementing it in Julia. Not ideal, but pragmatic.

Just a friendly FYI: note that Julia can run roughly 10x faster than MATLAB on benchmarks like the one below.

For example, https://juliaobserver.com/packages/LowRankApprox takes:

- ~0.02 s using LowRankApprox in Julia (10x faster than MATLAB, 3x faster than Python/Fortran)
- ~0.07 s using SciPy in Python (calling a Fortran backend; see PyMatrixID)
- ~0.3 s in MATLAB

This difference can be attributed in part both to Julia algorithmic improvements and to some Julia low-level optimizations.


To teach your students “how to fish better”, also consider using the keyword package search (KWS) here: https://juliaobserver.com/ , where I found this for you:

Probability :game_die:

[ ] 1977: Expectation Maximization (Arthur Dempster, Nan Laird, Donald Rubin)
[ ] 1978: VEGAS Monte-Carlo (G.P. Lepage)
[ ] 1984: Gibbs sampling (Stuart and Donald Geman)
[ ] 1985: **Reservoir sampling** (Jeffrey Vitter)

From a cursory keyword search for “reservoir” at https://juliaobserver.com/ ==>> https://juliaobserver.com/packages/Algo

Also, it’s less pedantic but more “to the point”: for your **strictly higher-level** functional requirements, take a look at https://juliaobserver.com/packages/ChaosTools and https://juliaobserver.com/packages/DynamicalSystemsBase .

HTH and Cheers !
-Marc Cox

P.S. I’d be interested to hear whether you or your students “made the leap” to implementing this with Flux.jl or Knet.jl, or whether you simply found that numerical methods were sufficient or even preferable.