One thing I’m particularly interested in is calculating the entanglement entropy for various lattice models. That offers some low-hanging fruit to start with, reproducing results from the literature, while still letting one branch out pretty fast.
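As a minimal sketch of the kind of calculation I mean (my own example, not from any particular paper): for a pure state on a chain of L spin-1/2 sites, the von Neumann entanglement entropy across a cut follows from the singular values of the reshaped amplitude tensor.

```python
import numpy as np

def entanglement_entropy(psi, L, cut):
    """von Neumann entropy S = -sum p ln p for a bipartition at site `cut`."""
    # Reshape the 2^L amplitude vector into a (left block, right block) matrix
    m = psi.reshape(2**cut, 2**(L - cut))
    s = np.linalg.svd(m, compute_uv=False)
    p = s**2                  # Schmidt coefficients squared
    p = p[p > 1e-12]          # drop numerical zeros before taking the log
    return -np.sum(p * np.log(p))

# Sanity check: a GHZ state (|0000> + |1111>)/sqrt(2) has S = ln 2 across any cut
L = 4
psi = np.zeros(2**L)
psi[0] = psi[-1] = 1 / np.sqrt(2)
print(entanglement_entropy(psi, L, 2))   # ≈ 0.6931 = ln 2
```

The same function works unchanged for ground states obtained from exact diagonalization of small lattice Hamiltonians, which is where the literature comparisons start.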

Having originally come from Loop Quantum Gravity, I feel your pain. Though to be fair, it’s not actually obvious to me that there *isn’t* some transformation group that neural networks are covariant under, so it may be that these arrays actually are tensorial with respect to **some** information-theoretic transformation. But that sounds like a non-trivial statement to explore.