Any benchmark about memory consumption?

There are some benchmarks comparing how long Julia takes on several tasks relative to other languages.
But what about memory consumption?
For example, how well does Julia reshape a dataset from long to wide format compared to R’s data.table, dplyr, Matlab, or Python?
Or how much memory does MixedModels.jl need to fit a regression model with random effects compared to R’s lme4 or Stata?
Anything about memory leakage?
I mean, many languages use more and more memory (even with a GC) and eventually the program crashes. Does Julia have this problem?

2 Likes

That depends entirely on the implementation. However, Julia exposes many in-place, non-allocating operations that are not available in other dynamic languages. And there’s always the option to write your own non-allocating algorithm using loops and recursion, which, unlike in “vectorized languages”, will neither be slow nor allocate lots of intermediate temporary arrays.
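
For example, here is a minimal sketch of the difference between allocating and in-place versions of the same operations (illustrative only, not tied to any particular package):

using LinearAlgebra

A = rand(500, 500); B = rand(500, 500)
C = A * B            # allocates a fresh 500×500 result matrix on every call
C = similar(A)       # preallocate the output once...
mul!(C, A, B)        # ...and write the product into it, with no new allocation

x = rand(10^6)
y = 2 .* x .+ 1      # allocates a new vector for the result
y .= 2 .* x .+ 1     # fused, in-place broadcast: reuses y instead of allocating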

Or how much memory does MixedModels.jl need to fit a regression model with random effects compared to R’s lme4 or Stata?

Those are implemented in C so presumably they’re pretty good about not using excessive memory. Therefore it seems a bit unlikely that a Julia implementation would use less memory unless it uses different algorithms.

Anything about memory leakage?

Julia should not leak memory. Any leak should be reported as a bug with the appropriate package (if a package is causing the leak) or Julia itself if it is at fault (unusual but it does happen).

I mean, many languages use more and more memory (even with a GC) and eventually the program crashes. Does Julia have this problem?

No, it does not.

2 Likes

I think you want peak memory consumption? The simplest approach is just to use something like GNU time:

$ /usr/bin/time julia -e "Vector{Int}(undef,100_000_000)"
0.12user 0.04system 0:00.16elapsed 103%CPU (0avgtext+0avgdata 108816maxresident)k
56inputs+0outputs (0major+20732minor)pagefaults 0swaps
$ /usr/bin/time julia -e "zeros(Int,100_000_000)"
0.21user 0.36system 0:00.57elapsed 100%CPU (0avgtext+0avgdata 895368maxresident)k
56inputs+0outputs (0major+216335minor)pagefaults 0swaps

You can see that doing essentially nothing takes ~100 MB and ~170 ms, that one Int takes ~8 bytes (the difference between the two runs, 895368 − 108816 ≈ 786552 KB, is roughly 8 bytes per element for the 100 million integers), and that undef vectors are lazily materialized (until the kernel faults the pages in, they consume no memory at all, which is why the first run stays at the ~100 MB baseline).

Since the Julia GC is, as far as I know, unable to track how much real memory is consumed, you probably need external tools that talk to the kernel.
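
That said, if you want to ask the kernel from inside the running process instead of wrapping the whole run in GNU time, recent Julia versions expose Sys.maxrss(), which reports the peak resident set size as seen by the OS (a small sketch; the exact numbers depend on your system and Julia version):

v = zeros(Int, 100_000_000)                        # ~800 MB of actual memory
println(round(Sys.maxrss() / 2^20), " MiB peak resident set size so far")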

2 Likes

I thought Julia packages were implemented in Julia.

To check memory allocation in Julia you can use @allocated, but this won’t give you the real peak memory consumption, because Julia has some constant overhead from loading LLVM, BLAS, etc. But if you are just testing your implementation’s memory consumption, it is enough.
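
For instance (a small sketch; sum_of_zeros is just a made-up helper for illustration, and the exact byte count varies a bit across Julia versions):

sum_of_zeros(n) = sum(zeros(n))    # allocates a temporary vector of n Float64s
sum_of_zeros(10)                   # call once so compilation itself is not counted
@allocated sum_of_zeros(10^6)      # ≈ 8 * 10^6 bytes for the temporary vector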

And I do think we need such benchmarks; they are a way to evaluate the implementation of packages/stdlibs in Julia. In my case, I studied the performance of TensorOperations a few months ago:

It turned out that TensorOperations actually uses almost the same memory (plus a constant overhead) but runs faster.

https://github.com/Jutho/TensorOperations.jl/issues/43

I believe you can test other packages/stdlibs yourself as well (if this is crucial for you); it is not hard. It is just that, in quite a few cases, we don’t really care about the memory (buying more RAM, up to 128 GB, is quite cheap).

2 Likes

But you can’t keep upgrading your memory indefinitely; it’s not that cheap, and every computer has a maximum.

1 Like

They are; I was talking about lme4 and Stata, which are implemented in C (or C++, as the case may be).

Also, Stata is not that great at this. Base reg and xtreg allocate a ton of memory when using random effects. Plus, due to Stata’s proprietary licensing, you can’t use fixed effects with more than 5,000 levels without upgrading to Stata SE, since each new dummy counts as a variable. lme4 uses sparse matrices for its random-effects terms, which is great and which I don’t think Stata does at all. MixedModels.jl uses sparse matrices in the same way lme4 does.

The current bar for Stata is reghdfe, which is iterative and never allocates the random-effects matrix directly, sparse or otherwise. FixedEffectsModels implements a similar algorithm, and I suspect others do as well, or will soon.

2 Likes

In these benchmarks:

You can see that Julia almost always needs much more memory than C++ or Python.

I guess this is an old thread, and has been discussed before; but from quickly browsing that page it looks like they are measuring the time to spin up a Julia process, with precompilation and everything, in addition to the actual benchmark.

Yes, but this is what really affects the user of Julia: the total time and memory, including “compilation”. This memory consumption limits the maximum size of the problems your computer can deal with.

That’s misleading. Compilation only happens once (per combination of input types). If you use the function several times, that cost goes away.
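
You can see this for yourself with @time (a sketch; the exact timings and allocation counts are machine-dependent):

f(x) = x^2 + 1
@time f(2.0)    # first call with a Float64: includes the one-time compilation cost
@time f(2.0)    # second call: compilation is gone, essentially no time or allocation
@time f(2)      # first call with an Int argument: compiles once more for the new type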

1 Like