I have a complex modeling code that involves a while loop which may be run a million times. In this loop there are many, though not particularly complex, mathematical operations, mainly involving SVectors and Arrays of SVectors.
As the same elements are involved in each iteration, I expect the memory usage to be constant no matter how many iterations are performed. However, I see that the memory usage grows with time.
How could I find the origin of this memory build-up?
I tried running my code with julia --track-allocation=user, but it gives me a cumulative memory usage per line of code, so it does not tell me whether the memory was freed at the end of each iteration.
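Ideally I would like to watch the live heap per iteration. Something like this is what I have in mind (a sketch; I believe Base.gc_live_bytes() reports the currently live heap size, and the sampling interval is arbitrary):

live0 = Base.gc_live_bytes()
for i in 1:1_000_000
    # ... loop body goes here ...
    if i % 100_000 == 0   # sample the live heap every so often
        println("iter $i: live bytes = ", Base.gc_live_bytes() - live0)
    end
end

If the printed number keeps climbing even after the GC has had a chance to run, something is really being retained.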
Have you run the loop with @allocated?
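For example, something like this (a sketch: wrap the body in a function, since measuring at global scope can report spurious allocations, and call it once first so compilation is not counted; step! here is just a stand-in for your actual loop body):

function step!(x)       # stand-in for the real loop body
    fill!(x, 0.0)
    return nothing
end

x = zeros(10^6)
step!(x)                # warm-up call, so compilation is not measured
@allocated step!(x)     # 0 means the body itself is allocation-free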
Also, I am sure you will be asked for a sample of the code in this thread, so please post a minimum working example.
Yes, I have tried @allocated, but it just gives me a single number at the end. Of course this number increases with the number of steps performed by the loop.
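Maybe I should log @allocated per iteration instead, something along these lines (step! again stands in for the real body), to see whether the per-step allocation is constant or growing:

function step!(state)                       # stand-in for the real body
    fill!(state, 0.0)
    return nothing
end

function run!(state, nsteps)
    for i in 1:nsteps
        bytes = @allocated step!(state)     # allocations of this iteration
        i % 10_000 == 0 && println("iter $i: $bytes bytes")
    end
end

run!(zeros(10^6), 100_000)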
The loop body is very long, and I am still struggling to cut it down to a minimal working example.
I am using arrays of SVectors that I have to reset to [0, 0, 0] at every time step. I suspect that every time the reset is performed, a new array is allocated without the old one being freed. I guess it would eventually be freed, but only once the loop is finished.
I have tried this. I added GC.gc() at the end of the while loop so that it is executed at the end of each step. Unfortunately, this did not decrease the memory use; it only increased the computation time.
Actually, what I do is as follows. If x is an array of SVectors, I zero it this way:
fill!(x, @SVector [0., 0., 0.])
The length of x is quite big, which is why I am doing it this way. I suppose that this does not overwrite what was in x, but reallocates it.
Is there a better way to do this?
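To check whether the reset itself allocates, I suppose I could do something like this (the array length is made up):

using StaticArrays

x = [rand(SVector{3, Float64}) for _ in 1:10^6]
reset!(x) = fill!(x, @SVector [0., 0., 0.])
reset!(x)               # warm up so compilation is not counted
@allocated reset!(x)    # 0 would mean fill! mutates x in place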
Factor parts of your code into smaller functions and check those.
That said, some allocation is acceptable for certain calculations and may not be the bottleneck in your computation — it is hard to say without actual code that we can run. If you are just allocating a single large array every iteration, it is unlikely to be a major problem and GC will just deal with the garbage when necessary.
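For instance, if the loop body splits into stages, you can check each stage on its own; @time prints the number and size of allocations. A sketch with made-up stages:

function update!(f, x)          # made-up stage 1
    @. f = 2x
    return nothing
end

function advance!(x, f, dt)     # made-up stage 2
    @. x += dt * f
    return nothing
end

x = rand(10^6); f = similar(x)
update!(f, x); advance!(x, f, 0.1)   # warm up (compile) first
@time update!(f, x)                  # allocation count/size per stage
@time advance!(x, f, 0.1)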
Have you compared the memory growth with a version using regular vectors instead of SVectors? I realise that might (or might not) degrade execution speed.
I wonder if there’s an accumulation of copies of immutable SVectors. MVectors might be worth a try.
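With MVectors the reset could mutate each element in place, something like this sketch:

using StaticArrays

x = [zero(MVector{3, Float64}) for _ in 1:10^6]

function reset!(x)
    for v in x
        v .= 0.0    # overwrite each MVector in place, no new objects
    end
    return nothing
end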
Replacing SVectors with MVectors has helped with memory consumption. Thanks for this advice.
However, I found the real problem. I am using the WriteVTK package to write VTK outputs, and it turns out that writing compressed VTK output causes the memory build-up. I don't know why, but this will have to be reported to the developers of the package.
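In the meantime I am simply writing uncompressed output. If I read the WriteVTK docs correctly, compression is controlled by a compress keyword, so the workaround looks roughly like this (grid and field are made up):

using WriteVTK

xs, ys, zs = 0:0.1:1, 0:0.1:1, 0:0.1:1
vtk = vtk_grid("step_001", xs, ys, zs; compress = false)  # no compression
vtk["pressure"] = rand(11, 11, 11)                        # made-up point data
vtk_save(vtk)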
SVectors shouldn't be heap-allocated at all and thus should never show up in your tracked allocations; using MVector should only be able to make things worse.
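A quick way to convince yourself (a sketch; the const is there so the call at global scope is fully inferred):

using StaticArrays

function accum(x)
    s = zero(SVector{3, Float64})
    for v in x
        s += 2v     # pure SVector arithmetic stays on the stack
    end
    return s
end

const xs = [rand(SVector{3, Float64}) for _ in 1:10^5]
accum(xs)              # warm up
@allocated accum(xs)   # expect 0: no heap allocation from SVectors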