First-time user of CUDA.jl here. My Julia versioninfo() output is:
Julia Version 1.12.2
Commit ca9b6662be4 (2025-11-20 16:25 UTC)
Build Info:
Official https://julialang.org release
Platform Info:
OS: Linux (x86_64-linux-gnu)
CPU: 32 × AMD Ryzen 9 9950X3D 16-Core Processor
WORD_SIZE: 64
LLVM: libLLVM-18.1.7 (ORCJIT, znver5)
GC: Built with stock GC
Threads: 16 default, 1 interactive, 16 GC (on 32 virtual cores)
Environment:
JULIA_EDITOR = code
After running ] test CUDA, I get:
┌ Info: System information:
│ CUDA toolchain:
│ - runtime 13.0, artifact installation
│ - driver 580.105.8 for 13.0
│ - compiler 13.0
│
│ CUDA libraries:
│ - CUBLAS: 13.1.0
│ - CURAND: 10.4.0
│ - CUFFT: 12.0.0
│ - CUSOLVER: 12.0.4
│ - CUSPARSE: 12.6.3
│ - CUPTI: 2025.3.1 (API 13.0.1)
│ - NVML: 13.0.0+580.105.8
│
│ Julia packages:
│ - CUDA: 5.9.4
│ - CUDA_Driver_jll: 13.0.2+0
│ - CUDA_Compiler_jll: 0.3.0+0
│ - CUDA_Runtime_jll: 0.19.2+0
│
│ Toolchain:
│ - Julia: 1.12.2
│ - LLVM: 18.1.7
│
│ 1 device:
└ 0: NVIDIA RTX A2000 12GB (sm_86, 10.240 GiB / 11.994 GiB available)
The run ends with two errors, both identical and both pointing at linalg.jl:
Error in testset gpuarrays/linalg/core:
Error During Test at /home/rogerp/.julia/packages/GPUArrays/E99Fj/test/testsuite/linalg.jl:320
Test threw exception
Expression: compare(f, AT, A_empty, d)
DivideError: integer division error
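For anyone trying to reproduce this, here's my guess at a minimal REPL version. The actual function f passed at linalg.jl:320 isn't shown in the failure, so the kron call below is only an assumption based on the (A_empty, d) argument pair in the failing expression:

using CUDA, LinearAlgebra

# Guessed minimal reproducer (the actual f in the test isn't shown):
# an empty GPU matrix combined with a Diagonal, mirroring
# compare(f, AT, A_empty, d) from the testsuite.
A_empty = CuArray{Float32}(undef, 0, 0)   # 0×0 matrix on the GPU
d = Diagonal(CUDA.rand(Float32, 2))
kron(A_empty, d)                          # does this hit the same DivideError?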
Is this something I’ve done? I don’t see anyone else reporting this issue.
Happy to post the whole output, but I've forgotten how.
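If capturing it from within Julia is a sensible approach, I'd guess something like this would work (assuming the test subprocess inherits the redirected streams):

using Pkg

# Send the full test log to a file so it can be attached here.
redirect_stdio(stdout="cuda_test.log", stderr="cuda_test.log") do
    Pkg.test("CUDA")
end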
Roger