Is it time to migrate to CUDA.jl?

From the documentation of the corresponding packages:

  • CuArrays.jl is deprecated in favor of CUDA.jl
  • CUDA.jl requires Julia 1.4 or newer
  • the latest LTS version of Julia is 1.0

From these facts I conclude that the only way to keep backward compatibility is to not migrate to CUDA.jl and to keep using the deprecated packages. Is this conclusion correct, or am I missing some workaround?


The JuliaCon State of Julia talk mentioned that Julia v1.0 is feeling very outdated, and that 1.6 will be the next LTS.

That’s my 5 cents (more like 1 cent).


Strictly speaking, a new LTS version doesn’t mean immediate migration to it, at least not in industry. For example, Ansible added support for Python 3 only in 2018, ten years after Python 3.0 was released. The latest version of Spark still supports Java 8 (the latest LTS is Java 11, the latest release is Java 14) and Python 2.7 (the latest is 3.8). I’m working on a one-year-old Linux Mint based on a two-year-old Ubuntu release. Julia 1.0, released on August 8, 2018, doesn’t seem that old :slight_smile:

Still, given that Julia and my package don’t have that many users (yet), I decided to drop support for Julia < 1.4 and migrate to CUDA.jl.
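For anyone else making the switch, the user-facing API is mostly unchanged; a minimal sketch (assuming a working GPU and CUDA setup) is essentially just swapping the using statement:

```julia
using CUDA                      # previously: using CuArrays

x = cu(rand(Float32, 1024))     # move data to the GPU as a CuArray
y = 2f0 .* x .+ 1f0             # broadcasting works the same as with CuArrays
Array(y)                        # copy the result back to the host
```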

Since CUDA.jl integrates tightly with base Julia (its compiler, but also by redefining internal functions to make array operations GPU compatible), it’s not that easy to maintain compatibility with older versions of Julia. Supporting 1.3 might still be possible, but beyond that we don’t have artifacts. We could conditionalize all that code on the Julia version, as sketched below, but unless there’s a specific need, that would only make the code messier.
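For context, the kind of version gating meant here is a compile-time check along these lines (the branch bodies are placeholders, not actual CUDA.jl internals):

```julia
# Placeholder sketch: gate code paths on the Julia version at compile time.
# The branch bodies stand in for whichever internal API differs between releases.
@static if VERSION >= v"1.4"
    # code path using current Julia internals
else
    # fallback code path for older Julia
end
```

Multiply that by every internal API that moved between releases, and the maintenance cost becomes clear.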


Then what’s the best CUDA.jl alternative that works on Julia v1.5?

Thanks for the clarification! I actually appreciate both the progress in CUDA programming and the way the team treats breaking changes, i.e. the decision to keep CuArrays & co. for older versions of Julia and start a new package for the newer ones. The only piece I was missing was the intended way to switch. Hopefully this post will help others figure out how to do it :slight_smile:

Current CUDA.jl should work with Julia 1.4+ (just tested some basic operations on 1.5/CUDA 11 the other day).
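If you want to run a similar smoke test yourself, something along these lines exercises both array operations and kernel compilation (the kernel and sizes are arbitrary, just for illustration):

```julia
using CUDA, Test

# Simple saxpy-style kernel: y[i] += a * x[i], one thread per element.
function axpy_kernel!(y, a, x)
    i = threadIdx().x + (blockIdx().x - 1) * blockDim().x
    if i <= length(y)
        @inbounds y[i] += a * x[i]
    end
    return nothing
end

x = CUDA.fill(1f0, 1024)
y = CUDA.fill(2f0, 1024)
@cuda threads=256 blocks=4 axpy_kernel!(y, 3f0, x)   # 256 * 4 == 1024 threads
@test all(Array(y) .== 5f0)                          # 2 + 3 * 1 == 5
```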

With https://github.com/JuliaGPU/CUDA.jl/pull/377 the next version will support 1.3+, but for the versions after that we’ll need to move forward aggressively (first 1.5+, then 1.6 or higher once it’s released) due to significant changes in Julia that are not backwards compatible (mostly internal compiler APIs).


In this scenario, what would be the best way to support several versions of Julia in a package that has CUDA.jl in its dependencies? Is there a way to conditionally specify different versions of CUDA.jl in Project.toml?

Are there any plans to support GPU data frames like this one: https://github.com/rapidsai/cudf

Just list, in the [compat] section of your Project.toml, all of the major (i.e. breaking) versions that your package is compatible with, so currently CUDA = "0.1, 1". The package resolver should pick a version that’s compatible with the user’s Julia.
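For illustration, the corresponding compat entries could look like this (the julia bound is only an example; set it to whatever you actually support):

```toml
[compat]
CUDA = "0.1, 1"   # all breaking CUDA.jl releases this package is known to work with
julia = "1.3"     # illustrative lower bound on Julia itself
```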
