Consider the situation where numerous lightweight objects all depend on a heavy-weight context (something akin to the Flyweight pattern in OOP). Here's what it looks like in my projects:
The heavy-weight context:
```julia
mutable struct LiftedComplex{N,D}
    d::Vector{Matrix{Tuple{Int, SVector{N, Int}}}} # Some important data
    # More data...
    cache::Dict{Symbol, Any}
end
```
The lightweight object:
```julia
struct LiftedSimplex{Ds, N, D}
    lc::LiftedComplex{N, D} # The parent LiftedComplex
    idx::Int                # The index of the simplex
    trans::SVector{N, Int}  # The translation
end
```
Some functions involving the lightweight objects can be accelerated by precomputing additional data in the heavy-weight context. The dictionary `cache` is intended to store this redundant data. When such a function is called, the dictionary is checked for the corresponding key. If the key exists, the stored value is used; otherwise the data is first computed and stored in the dictionary.
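As a concrete sketch of this lookup pattern (the function `boundary_table`, the helper `compute_boundary_table`, and the cache key `:boundary_table` are all hypothetical names, not from my actual code), the single-threaded version might look like:

```julia
# Return the cached table, computing it on first access.
function boundary_table(lc::LiftedComplex)
    get!(lc.cache, :boundary_table) do
        # Expensive precomputation over lc.d happens only once,
        # the first time this key is requested.
        compute_boundary_table(lc)  # hypothetical helper
    end
end
```

Here `get!` with a `do`-block performs the check-then-insert in one call, but that is only atomic in the single-threaded sense.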
My question concerns thread safety. I reckon that, as described, the code is not thread safe, and I am considering adding a lock to the heavy-weight context, to be acquired before checking for the key and released after the data is created or used. The question is: are there caveats to this approach?
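For concreteness, here is a minimal sketch of the locked variant I have in mind, assuming a `ReentrantLock` field is added to the context (the field name `cachelock` and the helper `cached!` are my own invention):

```julia
mutable struct LiftedComplex{N,D}
    d::Vector{Matrix{Tuple{Int, SVector{N, Int}}}} # Some important data
    # More data...
    cache::Dict{Symbol, Any}
    cachelock::ReentrantLock  # guards all access to `cache`
end

# Compute-or-fetch under the lock; `f` produces the value on a cache miss.
function cached!(f, lc::LiftedComplex, key::Symbol)
    lock(lc.cachelock) do
        get!(f, lc.cache, key)
    end
end
```

Two caveats I can already see, which is partly why I am asking: holding the lock for the entire computation serializes all threads that touch the cache, even for unrelated keys; and releasing the lock during the computation (a double-checked pattern) avoids that but can compute the same value twice. Note also that since `Dict` is not thread safe, even pure lookups must take the lock, because another thread may be mutating the dictionary concurrently.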