I set up Julia in Google Colab using Gordan MacMillan’s GitHub code. I then ran the following few lines to compare the performance of Julia in Google Colab with that of a Jupyter notebook on Ubuntu 20.04 under WSL.
```julia
using BenchmarkTools

random_image_cpu = randn(100, 100, 3, 100)   # 100×100×3×100 array of random Float64 values
mm = sum                                     # the reduction being benchmarked
println("CPU (s):")
@benchmark mm(random_image_cpu)
```
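As an aside, the timings below were taken with the snippet exactly as written above. The BenchmarkTools documentation recommends interpolating non-constant globals with `$` so that the global-variable lookup is not part of what gets measured; a minimal sketch of that variant (it should only change the absolute numbers, not the Colab-vs-WSL comparison):

```julia
using BenchmarkTools

random_image_cpu = randn(100, 100, 3, 100)
# `$` interpolates the value into the benchmark expression, so the cost of
# looking up the non-constant global is excluded from the measurement.
@benchmark sum($random_image_cpu)
```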
In Google Colab, I got the following output [ ~1.2 GB RAM ]:
```
CPU (s):
BenchmarkTools.Trial:
  memory estimate:  16 bytes
  allocs estimate:  1
  --------------
  minimum time:     899.046 μs (0.00% GC)
  median time:      963.967 μs (0.00% GC)
  mean time:        982.151 μs (0.00% GC)
  maximum time:     2.087 ms (0.00% GC)
  --------------
  samples:          5053
  evals/sample:     1
```
In a Jupyter notebook on Ubuntu 20.04 WSL, I got the following output [8 GB RAM and 4 physical cores]:
```
BenchmarkTools.Trial:
  memory estimate:  16 bytes
  allocs estimate:  1
  --------------
  minimum time:     1.101 ms (0.00% GC)
  median time:      1.235 ms (0.00% GC)
  mean time:        1.268 ms (0.00% GC)
  maximum time:     2.187 ms (0.00% GC)
  --------------
  samples:          3915
  evals/sample:     1
```
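Since the two environments run on quite different hardware, the following standard-library calls would show what CPU, and how many logical cores, each side actually exposes to Julia. This is a sketch for reference; I have not included its output here:

```julia
using InteractiveUtils                 # provides versioninfo()

versioninfo()                          # Julia version, OS, CPU model, word size
println(Sys.CPU_THREADS)               # logical CPU threads visible to this session
println(first(Sys.cpu_info()).model)   # CPU model string reported by the OS
```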
Based on what I have been reading online, a CPU run in Google Colab should be much slower than on a local laptop, yet here Colab is faster. I can also clearly see a difference in RAM usage. The only other explanation I can think of is that Colab has a GPU enabled (although I am not explicitly using it).
- Why is Google Colab performing faster?
- Does the performance difference have something to do with my using WSL rather than a dedicated Ubuntu system?
- Does Julia code make use of extra cores or the GPU when available, without explicitly asking for them? If so, how can I find out whether they are being used? (See the sketch below for the kind of check I mean.)
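For the last question, this is the kind of check I have in mind. The `Threads` and `BLAS` calls are standard library; the GPU part assumes the CUDA.jl package, which I have not installed explicitly, so it is left commented out:

```julia
using LinearAlgebra

println(Threads.nthreads())                    # threads this Julia session was started with
println(LinearAlgebra.BLAS.get_num_threads())  # threads the underlying BLAS library uses

# Assuming CUDA.jl is installed (it is not part of Base Julia):
# using CUDA
# println(CUDA.functional())                   # true only if a usable NVIDIA GPU is detected
```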