Performance difference between Google Colab and Jupyter notebook

I set up Julia in Google Colab using Gordan MacMillan’s GitHub code. I ran the following few lines to compare the performance of Julia in Google Colab against a Jupyter notebook under Ubuntu 20.04 on WSL.

using BenchmarkTools

# 100×100×3×100 array of Float64 (about 23 MB)
random_image_cpu = randn(100, 100, 3, 100)
mm = sum

println("CPU (s):")
@benchmark mm(random_image_cpu)
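
As an aside, `mm` and `random_image_cpu` are non-const globals, so the BenchmarkTools manual recommends interpolating them with `$`; the 16-byte / 1-allocation estimate in the outputs below most likely comes from that global lookup. A variant of the last line:

# Interpolate the globals so only the sum itself is timed
# (this should typically bring the allocation estimate down to zero).
@benchmark $mm($random_image_cpu)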

In Google Colab, I got the following output [~1.2 GB RAM]:

CPU (s):
BenchmarkTools.Trial: 
  memory estimate:  16 bytes
  allocs estimate:  1
  --------------
  minimum time:     899.046 μs (0.00% GC)
  median time:      963.967 μs (0.00% GC)
  mean time:        982.151 μs (0.00% GC)
  maximum time:     2.087 ms (0.00% GC)
  --------------
  samples:          5053
  evals/sample:     1

In a Jupyter notebook under Ubuntu 20.04 WSL, I get the following output [8 GB RAM and 4 physical cores]:

BenchmarkTools.Trial: 
  memory estimate:  16 bytes
  allocs estimate:  1
  --------------
  minimum time:     1.101 ms (0.00% GC)
  median time:      1.235 ms (0.00% GC)
  mean time:        1.268 ms (0.00% GC)
  maximum time:     2.187 ms (0.00% GC)
  --------------
  samples:          3915
  evals/sample:     1

Based on what I have been reading online, Colab’s CPU should be much slower than a laptop’s. I can also clearly see a difference in the available RAM. The only other reason I can find is that Colab has a GPU enabled (although I am not explicitly using it).

  1. Why is Google Colab performing faster?
  2. Does the performance difference have something to do with me using WSL rather than a dedicated Ubuntu system?
  3. Does Julia code make use of extra cores or the GPU, if available, without being told to? If so, how can we find out about their usage?

To answer in order:

  1. Probably because you’re running inside WSL and going through a slower abstraction layer.
  2. Perhaps.
  3. Nothing you’re doing in that code will take advantage of multithreading or GPUs; see the sketch below.
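
To make point 3 concrete, here is a minimal sketch (the `threaded_sum` helper is purely illustrative, not something base `sum` ever does on its own): threads are only used if you start Julia with several of them and explicitly spread work across them, and the GPU is only used if you explicitly move the data there, e.g. with CUDA.jl.

using Base.Threads

# 1 unless Julia was started with `julia --threads=N` (or JULIA_NUM_THREADS is set)
println("Threads available: ", nthreads())

# An explicitly multithreaded reduction; plain `sum` never does this for you.
function threaded_sum(A)
    chunks = collect(Iterators.partition(eachindex(A), cld(length(A), nthreads())))
    partials = zeros(eltype(A), length(chunks))
    @threads for i in eachindex(chunks)
        partials[i] = sum(view(A, chunks[i]))
    end
    return sum(partials)
end

# GPU use is likewise opt-in, e.g. with CUDA.jl (requires an NVIDIA GPU):
# using CUDA
# random_image_gpu = CuArray(random_image_cpu)
# sum(random_image_gpu)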

FWIW, on an Ubuntu virtual machine, I’m getting slower performance than you are:

julia> @benchmark mm(random_image_cpu)
BenchmarkTools.Trial: 
  memory estimate:  16 bytes
  allocs estimate:  1
  --------------
  minimum time:     221.761 μs (0.00% GC)
  median time:      1.375 ms (0.00% GC)
  mean time:        1.383 ms (0.00% GC)
  maximum time:     2.863 ms (0.00% GC)
  --------------
  samples:          3555
  evals/sample:     1

This is with 32 GB of RAM, so it’s not a memory issue (the array is only about 23 MB).
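
For reference, that ~23 MB figure is just the raw size of the array, which you can confirm in the REPL:

julia> sizeof(random_image_cpu)         # 100*100*3*100 Float64s at 8 bytes each
24000000

julia> sizeof(random_image_cpu) / 2^20  # ≈ 22.9 MiB
22.88818359375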

I just installed Julia through Atom on Windows 10 and ran the same code in the REPL. I expected the performance to improve since I am no longer using WSL, but I am getting roughly the same performance.

julia> @benchmark mm(random_image_cpu)
BenchmarkTools.Trial: 
  memory estimate:  16 bytes
  allocs estimate:  1
  --------------
  minimum time:     1.145 ms (0.00% GC)
  median time:      1.277 ms (0.00% GC)
  mean time:        1.313 ms (0.00% GC)
  maximum time:     3.486 ms (0.00% GC)
  --------------
  samples:          3784
  evals/sample:     1