# Unexpected memory cost with Julia

Recently I found my code had an unexpected memory cost while working with Julia.

See the following example:

``````julia
using OMEinsum

len = 20
tensor1 = rand(len,len,len,len)
tensor2 = rand(len,len,len)
tensor3 = rand(len,len)

@time @ein tensorA[a,l,b,k,j] := tensor1[i, j, k, l] * tensor2[a,b,i]
@time @ein tensorB[b, d, k] := tensor3[a, b] * tensor2[a, d, k]

@time @ein tensorA[a,l,b,k,j] := tensor1[i, j, k, l] * tensor2[a,b,i]
@time @ein tensorB[b, d, k] := tensor3[a, b] * tensor2[a, d, k]
``````

output:

``````  2.807763 seconds (9.26 M allocations: 526.600 MiB, 9.24% gc time, 98.71% compilation time)
0.377838 seconds (1.04 M allocations: 52.680 MiB, 2.59% gc time, 99.94% compilation time)
0.034800 seconds (125 allocations: 50.117 MiB, 39.26% gc time)
0.000097 seconds (78 allocations: 70.391 KiB)
``````
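The pattern above — an expensive first call followed by cheap repeats — is ordinary Julia JIT behavior and can be reproduced without OMEinsum at all. A minimal Base-only sketch:

``````julia
# The first call to a function compiles it for the given argument types;
# the reported time and allocations include the compiler's own work.
f(x) = sum(abs2, x)    # any generic function will do

x = rand(100)
@time f(x)   # first call: dominated by compilation time and allocations
@time f(x)   # second call: just the runtime cost of the function itself
``````

The allocations reported for the first call belong mostly to the compiler, not to the computation.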

The actual cost of the operations in my example, measured after compilation, is:

``````julia
using OMEinsum
using BenchmarkTools

len = 20
tensor1 = rand(len,len,len,len)
tensor2 = rand(len,len,len)
tensor3 = rand(len,len)

@btime @ein tensorA[a,l,b,k,j] := tensor1[i, j, k, l] * tensor2[a,b,i]
@btime @ein tensorB[b, d, k] := tensor3[a, b] * tensor2[a, d, k]
``````

output:

``````  15.992 ms (126 allocations: 50.12 MiB)
16.452 μs (78 allocations: 70.39 KiB)
``````

I knew Julia would be a little slower the first time a function from a package is called, but I didn’t expect it to cost 10 times more memory than the steady-state run, sometimes causing OutOfMemory errors.

I guess it has something to do with Julia’s compilation rather than the OMEinsum package itself, but I don’t quite understand it. Can someone give me some guidance on how to reduce this memory overhead?
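One common way to keep the first-call spike small is a warm-up run: trigger compilation once on tiny tensors of the same rank and element type, so the first call on the real data runs already-compiled code. A sketch under that assumption (the generated code depends on the index pattern and element type, not on the tensor sizes):

``````julia
using OMEinsum

# Hypothetical warm-up: pay the compilation cost on 2×2×… tensors first.
small1 = rand(2, 2, 2, 2)
small2 = rand(2, 2, 2)
@ein warm[a, l, b, k, j] := small1[i, j, k, l] * small2[a, b, i]  # compiles here, cheaply

len = 20
tensor1 = rand(len, len, len, len)
tensor2 = rand(len, len, len)
# This full-size call should now reuse the compiled code, so its
# allocations are close to the @btime numbers rather than the first-call ones.
@time @ein tensorA[a, l, b, k, j] := tensor1[i, j, k, l] * tensor2[a, b, i]
``````

This doesn’t remove the compilation cost, but it moves it to a point where the temporaries involved are tiny.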

I found an even more extreme example:

``````julia
using Integrals
using SciMLSensitivity
using SphericalHarmonics
using FastTransforms
using BenchmarkTools
y=0.5
``````

This outputs:

``````14.107230 seconds (55.32 M allocations: 4.008 GiB, 6.54% gc time, 99.97% compilation time)
``````

Then, running the same code again:

``````julia
using Integrals
using SciMLSensitivity
using SphericalHarmonics
using FastTransforms
using BenchmarkTools
y=0.5
``````

The output is:

``````9.404 μs (113 allocations: 4.19 KiB)
``````

Are you actually running out of memory during compilation? What’s the peak memory usage of the Julia process?

What you posted doesn’t seem totally out of line for compiling what may be some complex code; it might take a similar amount of time for an ahead-of-time compiler. Keep in mind that the allocated memory amount reported is the sum of all allocations. In general, memory will be freed many times by the garbage collector throughout that period, so it should not be confused with memory usage at a single point in time or with peak usage.
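The distinction matters in practice: a loop can report gigabytes of cumulative allocation while the process’s resident memory stays flat, because each iteration’s garbage is collected. A Base-only sketch:

``````julia
# Allocates ~800 MB cumulatively, but never holds more than ~8 MB at once.
function churn(n)
    s = 0.0
    for _ in 1:n
        v = rand(10^6)   # ~8 MB temporary, becomes garbage after this iteration
        s += sum(v)
    end
    return s
end

@time churn(100)   # reports hundreds of MiB allocated in total
# Sys.maxrss() reports the process's peak resident memory, which stays modest here.
``````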

Julia has to compile your code to native code at some point. Julia is not an interpreted language.

If you would like to compile to native code ahead of time, I suggest looking into building a custom system image.

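For example, PackageCompiler.jl can bake the compiled code into a system image ahead of time. A sketch; the sysimage path and the precompile script name are placeholders:

``````julia
using PackageCompiler

# Build a system image with OMEinsum compiled in. "precompile.jl" is a
# hypothetical script that exercises the @ein calls you care about, so the
# corresponding native code is included in the image.
create_sysimage([:OMEinsum];
                sysimage_path = "sys_omeinsum.so",
                precompile_execution_file = "precompile.jl")
``````

Then start Julia with `julia --sysimage sys_omeinsum.so`, and the first call to the covered methods skips compilation entirely.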
I took a closer look at my code and found that I had made a mistake. I’m terribly sorry. Although the first compiled call consumes a lot of memory, it is not the cause of running out of memory.

I think the memory ran out in my code because the garbage collector never ran inside the nested loop. I found a similar problem reported before.
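If the loop body creates large temporaries faster than the collector reclaims them, one workaround (a sketch, with a hypothetical loop body) is to invoke the collector explicitly every so often:

``````julia
function process_all(n)
    total = 0.0
    for i in 1:n
        tmp = rand(10^6)   # large temporary created every iteration
        total += sum(tmp)
        if i % 100 == 0
            GC.gc()        # force a collection periodically to cap memory growth
        end
    end
    return total
end
``````

Preallocating the buffer once outside the loop and refilling it with `Random.rand!` avoids creating the garbage in the first place, and is usually the better fix than forcing collections.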

Thank you very much, but I don’t think the problem in my code is due to precompilation.