Threads.@threads for i in 1:length(chunk)
    memory_intensive_computation(chunk[i])
end
I don’t know a priori how memory-intensive the computation will be, though I can assume it’s similar for items within a chunk. Right now I have to allocate resources with a high enough memory-to-Julia-threads ratio that the job rarely runs out of memory, but that strategy is often wasteful. It’d be nice to do something more like
memory_per_item = estimate_memory(chunk[1])
n_threads = floor(Int, system_memory / (memory_per_item * safe_inflation_factor))
Threads.@threads n_threads for i in 1:length(chunk)  # hypothetical syntax; @threads takes no task count
    memory_intensive_computation(chunk[i])
end
i.e. dynamically choose the right parallelism level. What’s the best way to achieve this kind of pattern?
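One clean way to get this is with OhMyThreads.jl, which lets you set the number of parallel tasks for a loop explicitly, either inline with its @tasks macro: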
using OhMyThreads

memory_per_item = estimate_memory(chunk[1])
@tasks for i in eachindex(chunk)
    @set ntasks = floor(Int, system_memory / (memory_per_item * safe_inflation_factor))
    memory_intensive_computation(chunk[i])
end
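Since each task works through its chunk sequentially, at most ntasks items are in flight at once, so peak memory stays roughly bounded by ntasks * memory_per_item (times whatever safety factor you build in).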
or
using OhMyThreads

memory_per_item = estimate_memory(chunk[1])
ntasks = floor(Int, system_memory / (memory_per_item * safe_inflation_factor))
tforeach(eachindex(chunk); ntasks) do i
    memory_intensive_computation(chunk[i])
end
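For completeness, here is a minimal end-to-end sketch under some stated assumptions: Sys.free_memory() stands in for system_memory, an @allocated measurement of the first item stands in for estimate_memory (a rough proxy, since bytes allocated are not the same as peak resident memory), and the safety factor of 2 is arbitrary:

using OhMyThreads

# Assumed safety margin on the per-item estimate.
safe_inflation_factor = 2.0

# Rough per-item estimate: bytes allocated while processing one item.
# This measures allocation, not peak resident memory, so treat it as a proxy.
# max(1, ...) guards against a zero estimate (and division by zero below).
memory_per_item = max(1, @allocated memory_intensive_computation(chunk[1]))

# Budget against memory that is actually free right now, and clamp so we
# always run at least one task and never spawn more tasks than threads.
ntasks = clamp(
    floor(Int, Sys.free_memory() / (memory_per_item * safe_inflation_factor)),
    1, Threads.nthreads(),
)

tforeach(eachindex(chunk); ntasks) do i
    memory_intensive_computation(chunk[i])
end

Clamping to 1:Threads.nthreads() avoids a zero or negative task count and avoids spawning more tasks than threads, since concurrency (and hence memory pressure) is bounded by the thread count anyway.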