No, memoization is not a fix-all here. For clarity, memoization simply means storing input→output mappings for future reuse. There are cases where this applies, but they're not pervasive. If you want to speed up code where you're running a lot of ODE simulations with the same options but different initial conditions, or fitting an ML model like a random forest with a few top-level literal options, the internal parts will almost never see exactly the same floating-point values, so memoization wouldn't speed anything up here (in fact it would slow things down, since it would build a useless lookup table).
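A minimal sketch of this failure mode, using Python's `functools.lru_cache` as the memoization layer and a hypothetical toy Euler step (the function name and ODE are illustrative, not from any real package):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def euler_step(y, h):
    # One explicit Euler step for the toy ODE y' = -y.
    # The cache keys on the exact floating-point value of y.
    return y + h * (-y)

# Simulate from two different initial conditions. Every step produces
# a new floating-point state, so the cache is consulted 2000 times
# and never gets a single hit -- it only grows.
for y0 in (1.0, 2.0):
    y = y0
    for _ in range(1000):
        y = euler_step(y, 0.01)

print(euler_step.cache_info())  # hits=0, misses=2000
```

All of the per-call hashing and table growth is pure overhead here: the memoized version does strictly more work than the plain function.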
I realized that fully explaining why what you're saying doesn't hold requires going into technical detail about how the language features enable a ton of optimizations and package-building utilities. That's too much for a forum post, so I'll write up a blog post soon.