Unexpected memory allocation from @timev?

I have written a script of a few hundred lines, with everything defined inside functions. The code computes a temperature profile along a loop, stepping forward in time. After running it and checking with @timev I get

  0.556749 seconds (254.11 k allocations: 1.163 GiB, 37.83% gc time)
elapsed time (ns): 556748781
gc time (ns):      210599823
bytes allocated:   1249258184
pool allocs:       149066
non-pool GC allocs:105042
GC pauses:         55
full collections:  1

I was shocked by the amount of memory allocated and the number of allocations. I also noticed that if I run the loop for twice the number of time steps, the allocations double as well, so the more time steps I run, the more memory is allocated.
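
To make the scaling concrete, here is a stripped-down stand-in for the kind of update I do each time step (not my actual code; the stencil and sizes are made up):

# Hypothetical stand-in for the real script; the update rule is invented.
# Each time step builds new arrays, so allocations grow linearly with nsteps.
function evolve(nsteps::Int; n = 1000, dt = 1e-3)
    T = rand(n)
    for _ in 1:nsteps
        T = T .+ dt .* (circshift(T, 1) .- 2 .* T .+ circshift(T, -1))
    end
    return T
end

evolve(1)               # warm up (compile) before timing
@timev evolve(10_000)
@timev evolve(20_000)   # roughly twice the reported allocations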

My questions are: is the allocated memory the total amount of memory that my script requires while running, or is memory overwritten each time step? And what do “pool allocs”, “non-pool GC allocs” and “GC pauses” mean? I cannot find this information on the internet.

Thanks for helping!

No, it is the total amount of memory allocated during the execution of the program, but a lot of that memory will also be freed along the way. Frequent allocating and deallocating can be bad for performance.
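
As an illustration, taking the sketch above as a stand-in for your loop (so this is an assumption about what your code actually does), an in-place update reuses the same buffers and keeps the reported allocations roughly constant no matter how many steps you run:

# In-place variant of the hypothetical evolve above: buffers are allocated
# once and swapped each step, so allocations no longer scale with nsteps.
function evolve_inplace(nsteps::Int; n = 1000, dt = 1e-3)
    T, buf = rand(n), zeros(n)
    for _ in 1:nsteps
        @inbounds for i in 1:n
            buf[i] = T[i] + dt * (T[mod1(i - 1, n)] - 2 * T[i] + T[mod1(i + 1, n)])
        end
        T, buf = buf, T   # swap instead of allocating new arrays
    end
    return T
end

evolve_inplace(1)             # warm up
@time evolve_inplace(10_000)
@time evolve_inplace(20_000)  # allocation count stays essentially flat

The exact numbers will differ from yours, but the point is that per-step temporaries make allocations scale with the number of steps, while preallocated buffers do not.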

The “pool allocs”, “non-pool GC allocs” and “GC pauses” numbers are related to the implementation of the garbage collector. Use @time instead unless you are really interested in those extra numbers.

Thanks, Kristoffer! So how do I know how much memory my script really requires? If I run it on the cluster, I need to estimate how much physical memory to request.

You can run the script under time (I’m on a Mac, so I am using gtime for the GNU time command):

 gtime --verbose julia -e 1+1
        Command being timed: "julia -e 1+1"
        User time (seconds): 0.18
        System time (seconds): 0.10
        Percent of CPU this job got: 121%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.23
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 113748
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 1
        Minor (reclaiming a frame) page faults: 44538
        Voluntary context switches: 0
        Involuntary context switches: 1342
        Swaps: 0
        File system inputs: 0
        File system outputs: 0
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0

The Maximum resident set size (kbytes) should give you the maximum memory (in RAM) used by the process.
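
If it is more convenient, I believe you can also query the same peak from inside Julia with Sys.maxrss() (it returns bytes on recent Julia versions), for example at the end of the script:

# Peak resident set size of the current Julia process, in bytes.
# Sys.maxrss() should be available on reasonably recent Julia versions.
println("peak RSS: ", round(Sys.maxrss() / 2^20, digits = 1), " MiB")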

This is a great tip. I am on a Mac as well and installed gtime via Homebrew. The output is now:

	Command being timed: "julia stability.jl"
	User time (seconds): 0.58
	System time (seconds): 0.14
	Percent of CPU this job got: 173%
	Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.41
	Average shared text size (kbytes): 0
	Average unshared data size (kbytes): 0
	Average stack size (kbytes): 0
	Average total size (kbytes): 0
	Maximum resident set size (kbytes): 130732
	Average resident set size (kbytes): 0
	Major (requiring I/O) page faults: 2
	Minor (reclaiming a frame) page faults: 48802
	Voluntary context switches: 0
	Involuntary context switches: 1034
	Swaps: 0
	File system inputs: 0
	File system outputs: 0
	Socket messages sent: 0
	Socket messages received: 0
	Signals delivered: 0
	Page size (bytes): 4096
	Exit status: 0

So my conclusion is that the script takes 130732 kB ≈ 130 MB. Still a lot, but looking at your “1+1” run (113748 kB), I guess there is a fixed overhead from the Julia runtime itself, because “1+1” on its own does not consume much :)

That is a good way to measure how much memory is actually used by the process, but is it also reasonable to use it to measure how much memory is required?

In some languages/frameworks (like .NET) these are two very different concepts: if there’s a lot of memory available on the system, a process can be allowed to hold on to quite a bit of memory before full GC kicks in. Looking at process memory usage can therefore be a bad estimator for the minimum amount of memory required. I don’t know how this works in Julia.
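
One crude probe, and I am not sure how meaningful these internal counters are in Julia, would be to force a full collection and see how much the GC still considers live afterwards:

# Rough probe only: Base.gc_live_bytes() is an internal helper, so treat its
# availability and exact meaning as version-dependent.
before = Base.gc_live_bytes()
GC.gc()                      # request a full collection
after  = Base.gc_live_bytes()
println("live bytes before full GC: ", before)
println("live bytes after  full GC: ", after)

If the figure after a full collection is much smaller than the peak resident size, the process was holding on to memory it did not strictly need at that moment.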
