SimpleChains.jl vs. George Hotz & Tinygrad?

George Hotz recently stepped down from comma.ai and announced he may devote more effort to his Tinygrad package:

I’m considering another company, the Tiny Corporation. Under 1000 lines, under 3 people, 3x faster than PyTorch? For smaller models, there’s so much left on the table. And if you step away from the well-tread ground of x86 and CUDA, there’s 10x+ performance to gain. Several very simple abstractions cover all modern deep learning, today’s libraries are way too complex.

Superficially, this sounds a bit like the goals of SimpleChains.jl. The question is how SimpleChains.jl differs in approach and goals from Tinygrad. The claimed speed improvements over PyTorch sound pretty comparable. I think both are currently CPU-only; in the short term Tinygrad may add support for Apple Silicon and Google TPUs, and in the long term they want to build their own hardware. What do people think: will SimpleChains surpass it?


I agree with their statement that 90% of what is required is just an efficient way to calculate gradients. Perhaps Julia could have AD as part of the standard library (alongside JSON, CSV, and HTTP handling).
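
It is not in the standard library today, but packages like Zygote.jl already cover that common case. A minimal sketch (the package choice and the toy model are just illustrative):

```julia
using Zygote  # one of several existing Julia AD packages

# Least-squares loss of a tiny linear model: loss(w) = sum((X*w - y).^2)
X = randn(10, 3)
y = randn(10)
loss(w) = sum(abs2, X * w - y)

w0 = randn(3)
g = Zygote.gradient(loss, w0)[1]  # gradient of the loss at w0, a length-3 vector
```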

It would be interesting to compare both on MNIST.
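
On the SimpleChains.jl side, such a benchmark could look roughly like the sketch below. It is loosely patterned on the package's MNIST example, but the exact constructors and signatures (`TurboDense`, `LogitCrossEntropyLoss`, `train_batched!`, etc.) may differ between versions, and random arrays stand in for the real MNIST data here:

```julia
using SimpleChains

# Stand-ins for flattened MNIST: 784-pixel images, labels in 1:10.
xtrain = rand(Float32, 784, 1_000)
ytrain = rand(1:10, 1_000)

# A small MLP; SimpleChains works with statically known layer sizes.
model = SimpleChain(
    static(784),
    TurboDense(SimpleChains.relu, 32),
    TurboDense(identity, 10),
)

modelloss = SimpleChains.add_loss(model, LogitCrossEntropyLoss(ytrain))
p = SimpleChains.init_params(model)              # flat parameter vector
G = SimpleChains.alloc_threaded_grad(modelloss)  # preallocated gradient buffers
SimpleChains.train_batched!(G, p, modelloss, xtrain, SimpleChains.ADAM(3e-4), 10)
```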

My long-term plan for SimpleChains is for it to be based on LoopModels + Enzyme.

For now, it is LoopVectorization.jl + pullback definitions.
Memory management is manual, and we should create a better API for that with cleaner separation.
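
To give a flavor of that approach (an illustrative sketch, not the actual SimpleChains source): a dense layer's forward pass and its hand-written pullback, each written as explicit loops under LoopVectorization's `@turbo`, with every buffer preallocated by the caller in keeping with the manual memory management.

```julia
using LoopVectorization

# Forward pass of a (bias-free) dense layer: y = W * x.
function dense_forward!(y, W, x)
    @turbo for i in axes(W, 1)
        acc = zero(eltype(y))
        for j in axes(W, 2)
            acc += W[i, j] * x[j]
        end
        y[i] = acc
    end
    return y
end

# Hand-written pullback: given dy = ∂loss/∂y, fill dW = ∂loss/∂W and dx = ∂loss/∂x.
function dense_pullback!(dW, dx, dy, W, x)
    @turbo for i in axes(W, 1), j in axes(W, 2)
        dW[i, j] = dy[i] * x[j]      # outer product dy * x'
    end
    @turbo for j in axes(W, 2)
        acc = zero(eltype(dx))
        for i in axes(W, 1)
            acc += W[i, j] * dy[i]   # dx = W' * dy
        end
        dx[j] = acc
    end
    return dW, dx
end
```

Presumably the longer-term LoopModels + Enzyme plan would replace hand-written pullbacks like these with generated ones.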
