So, I am trying to understand how multithreading works. I started Julia with `julia -t 16` on a 32-core node where I had reserved 8 cores, then I ran the example from the documentation. Here are the lines:

```
function sum_single(a)
    s = 0
    for i in a
        s += i
    end
    s
end

function sum_multi_good(a)
    chunks = Iterators.partition(a, length(a) ÷ Threads.nthreads())
    tasks = map(chunks) do chunk
        Threads.@spawn sum_single(chunk)
    end
    chunk_sums = fetch.(tasks)
    return sum_single(chunk_sums)
end
```
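For reference, a quick sanity check (my own addition, not from the docs) that both versions return the same answer; `1:1_000_000` sums to the closed form `n(n+1)/2 = 500_000_500_000`:

```julia
# Definitions repeated from above so this snippet runs standalone.
function sum_single(a)
    s = 0
    for i in a
        s += i
    end
    s
end

function sum_multi_good(a)
    chunks = Iterators.partition(a, length(a) ÷ Threads.nthreads())
    tasks = map(chunks) do chunk
        Threads.@spawn sum_single(chunk)
    end
    return sum_single(fetch.(tasks))
end

# Both should match the closed-form value n(n+1)/2 for n = 1_000_000.
@assert sum_single(1:1_000_000) == 500_000_500_000
@assert sum_multi_good(1:1_000_000) == sum_single(1:1_000_000)
```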

Then I ran the benchmarks (after loading `BenchmarkTools`), and here are the results:

```
julia> @benchmark sum_single(1:1_000_000)
BenchmarkTools.Trial: 10000 samples with 1000 evaluations.
 Range (min … max):  3.246 ns … 13.705 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     3.250 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   3.271 ns ±  0.252 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

 [histogram omitted]
  3.25 ns        Histogram: log(frequency) by time        3.5 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.
```

```
julia> @benchmark sum_multi_good(1:1_000_000)
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
 Range (min … max):  11.700 μs …  4.115 ms  ┊ GC (min … max): 0.00% … 96.91%
 Time  (median):     34.866 μs              ┊ GC (median):    0.00%
 Time  (mean ± σ):   38.832 μs ± 57.699 μs  ┊ GC (mean ± σ):  1.98% ± 1.37%

 [histogram omitted]
  11.7 μs         Histogram: frequency by time         62.3 μs <

 Memory estimate: 8.89 KiB, allocs estimate: 106.
```
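In case my benchmarking setup matters: I know BenchmarkTools recommends interpolating arguments with `$` so the benchmark does not also time access to a non-constant global, and a materialized `Vector` might behave differently from the lazy range. A rerun along these lines (whose numbers I have not shown here) would look like:

```julia
using BenchmarkTools

# Same function as above, repeated so this snippet is standalone.
function sum_single(a)
    s = 0
    for i in a
        s += i
    end
    s
end

# Interpolate with `$` to avoid timing global-variable access, and use a
# materialized Vector in case the compiler reduces the range-sum loop to a
# closed-form expression.
v = collect(1:1_000_000)
@benchmark sum_single($v)
```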

Why is the parallel version so much slower than the single-threaded one? (11.7 μs / 3.25 ns is roughly a factor of 3600!)

In general I don't seem to get significant gains from multithreading. I am using it to evaluate the stiffness matrices of the individual elements in a finite element program before assembling the structural stiffness matrix. Each element evaluation is quite expensive but independent of the others, so it seems close to the best case for parallel computing, yet I still don't see significant time savings.
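For context, here is a stripped-down sketch of what my element loop looks like (all names, sizes, and the placeholder arithmetic are hypothetical, not my actual code):

```julia
using Base.Threads, LinearAlgebra

# Placeholder for the real (expensive, independent) per-element computation;
# the 8x8 size and the arithmetic are made up for illustration.
function compute_element_stiffness(e)
    A = fill(float(e), 8, 8)
    return A * A' + I
end

# Compute all element stiffness matrices in parallel; assembly into the
# global matrix happens serially afterwards, outside the threaded loop.
function element_stiffnesses(elements)
    Ke = Vector{Matrix{Float64}}(undef, length(elements))
    @threads for i in eachindex(elements)
        Ke[i] = compute_element_stiffness(elements[i])
    end
    return Ke
end
```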

Many thanks in advance!