Right up front: I work on Colab, and I think the Julia language is pretty awesome. I have some math and data science background, Julia has felt like a very natural language to me, and I’ve wanted to see it in Colab for some time (it is, after all, one of the original three languages in the name Jupyter).
We don’t provide many pre-installed packages yet, but I’d love to hear feedback about your experiences with the Julia language in Colab: where it hits and where it misses!
Wow! This is incredible news @metrizable! It sounds like you had an outsized role in making this happen alongside your team – many thanks! This has been a requested feature for years, so it is exciting to see it happen.
A quick and dirty CPU vs GPU benchmark, using the (free!) “T4 GPU” runtime. I stopped at 10^8 because 10^9 exceeded either the 12.7 GB of RAM or the 15.0 GB of VRAM.
```julia
using Plots, CUDA, BenchmarkTools

pMax = 8
powerVector = 1:pMax
timeVectorCPU = Vector{Float16}(undef, pMax)
timeVectorGPU = Vector{Float16}(undef, pMax)

for p in powerVector
    n = 10^p
    # Allocate the vectors on the CPU, then copy them to the GPU.
    xCPU, yCPU = (ones(n), ones(n))
    xGPU, yGPU = (cu(xCPU), cu(yCPU))
    # Interpolate with $ so setup cost is excluded from the measurement.
    timeVectorCPU[p] = @belapsed $xCPU + $yCPU
    # GPU kernels launch asynchronously; CUDA.@sync waits for completion
    # so the timing reflects the actual work, not just the launch.
    timeVectorGPU[p] = @belapsed CUDA.@sync($xGPU + $yGPU)
end

timeVectorCPU |> display
timeVectorGPU |> display

plot(
    10 .^ powerVector,
    [timeVectorCPU timeVectorGPU],
    label = ["CPU" "GPU"],
    title = "CPU vs GPU",
    xscale = :log10,
    yscale = :log10,
    ylabel = "Elapsed time [s]",
    fmt = :png
)
```
As of a couple of months ago, Kaggle started deriving its Docker images from Colab’s public releases, so this is a real possibility in the near future. Colab hasn’t pushed the image with the Julia kernel to the public repo yet, but once it does, at least the bits should be there.
Currently, Colab doesn’t provide a simple way to determine the Google Drive path of the notebook you’re editing. I am aware of code snippets that allow this discovery, but they’re a little slow.
One way to access the artifacts on Google Drive is to mount it. Colab doesn’t yet offer a smooth way to do this from Julia, but it is available from Python. Until something better exists, one “hacky” workaround is to use the Python runtime to mount Google Drive (it remains mounted), then switch to the Julia runtime via the Runtime menu (or by patching the session).
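For reference, the Python half of that workaround is the standard Colab Drive mount; this sketch only runs inside a Colab Python runtime and will prompt you to authorize access:

```python
# Run this in a Colab *Python* runtime; the mount persists for the session.
from google.colab import drive

# Mounts your Drive at /content/drive (authorization prompt appears).
drive.mount('/content/drive')
```

After switching to the Julia runtime, the files should still be visible under the mount point, e.g. `readdir("/content/drive/MyDrive")` from Julia.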
Agreed. I tried something similar, but it seems the Python that’s invoked is not associated with the IJulia kernel. There may be more we could do here to hook those up.
First, in the “Select a runtime” dialog, choose a GPU accelerator instead of the CPU. Then add CUDA to the environment with `Pkg.add("CUDA")` (they’re working on getting CUDA pre-added to GPU runtimes; see metrizable’s link). THEN we can run `using CUDA; CUDA.versioninfo()`.
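Putting those steps together, a minimal sanity check from a fresh Julia GPU runtime looks something like this (assuming CUDA is not yet pre-installed):

```julia
# In a Colab Julia runtime with a GPU accelerator selected:
using Pkg
Pkg.add("CUDA")      # needed until CUDA ships pre-installed on GPU runtimes

using CUDA
CUDA.versioninfo()   # prints driver/toolkit versions for the attached GPU
CUDA.functional()    # returns true when the GPU is actually usable
```

If `CUDA.functional()` returns `false`, the most likely cause is that the runtime was started without a GPU accelerator selected.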