Thank you for these memoization performance benchmarks.
I understand your benchmarks and the book here might predate LRUCache, but the thread-safe (i.e., no race conditions) use case is very important when you want reliable and predictable results. From the package description: "A particular use case of this package is to implement function memoization for functions that can simultaneously be called from different threads."
https://github.com/JuliaCollections/LRUCache.jl
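To illustrate what I mean, here is a minimal sketch of that pattern, following the memoization example in the LRUCache.jl README (the names `CACHE`, `slow_f`, and `cached_f` are mine, purely for illustration; cache operations like `get!` are documented as thread-safe in recent versions of the package):

```julia
using LRUCache

# Shared cache; LRUCache.jl guards its operations with an internal lock,
# so concurrent lookups/insertions from different threads don't race.
const CACHE = LRU{Float64, Float64}(maxsize = 1000)

# Hypothetical expensive function, stand-in for the real computation.
slow_f(x) = (sleep(0.01); sin(x)^2 + cos(x)^2)

# Memoized wrapper: compute and store only on a cache miss.
function cached_f(x::Float64)
    get!(CACHE, x) do
        slow_f(x)
    end
end

# Safe to call from multiple threads, e.g.:
Threads.@threads for i in 1:100
    cached_f(Float64(i % 10))
end
```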
Maybe it's time to update the website and/or revise the book?
HTH
P.S. Maybe there is nothing new under the sun? I can't help noticing this recapitulates the design of Java's ConcurrentHashMap for thread-safe table lookups, as described here >> java - Is a HashMap thread-safe for different keys? - Stack Overflow