First-time user of CUDA.jl. My Julia versioninfo() is:
Julia Version 1.12.2
Commit ca9b6662be4 (2025-11-20 16:25 UTC)
Build Info:
Official https://julialang.org release
Platform Info:
OS: Linux (x86_64-linux-gnu)
CPU: 32 × AMD Ryzen 9 9950X3D 16-Core Processor
WORD_SIZE: 64
LLVM: libLLVM-18.1.7 (ORCJIT, znver5)
GC: Built with stock GC
Threads: 16 default, 1 interactive, 16 GC (on 32 virtual cores)
Environment:
JULIA_EDITOR = code
Following ] test CUDA:
┌ Info: System information:
│ CUDA toolchain:
│ - runtime 13.0, artifact installation
│ - driver 580.105.8 for 13.0
│ - compiler 13.0
│
│ CUDA libraries:
│ - CUBLAS: 13.1.0
│ - CURAND: 10.4.0
│ - CUFFT: 12.0.0
│ - CUSOLVER: 12.0.4
│ - CUSPARSE: 12.6.3
│ - CUPTI: 2025.3.1 (API 13.0.1)
│ - NVML: 13.0.0+580.105.8
│
│ Julia packages:
│ - CUDA: 5.9.4
│ - CUDA_Driver_jll: 13.0.2+0
│ - CUDA_Compiler_jll: 0.3.0+0
│ - CUDA_Runtime_jll: 0.19.2+0
│
│ Toolchain:
│ - Julia: 1.12.2
│ - LLVM: 18.1.7
│
│ 1 device:
└ 0: NVIDIA RTX A2000 12GB (sm_86, 10.240 GiB / 11.994 GiB available)
I get 2 errors, both the same and related to linalg.jl:
Error in testset gpuarrays/linalg/core:
Error During Test at /home/rogerp/.julia/packages/GPUArrays/E99Fj/test/testsuite/linalg.jl:320
Test threw exception
Expression: compare(f, AT, A_empty, d)
DivideError: integer division error
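In case it helps, this is roughly what I'd try in the REPL to reproduce it by hand. It is only a guess at the failing call: the actual f and d in compare(f, AT, A_empty, d) live inside the GPUArrays testsuite, so the operation and the empty shape here are assumptions.

```julia
using CUDA, LinearAlgebra

# Hypothetical reproduction: the failing test applies some
# linear-algebra function to an empty GPU array, so try one
# such operation on an empty CuArray and see if the same
# DivideError appears.
A_empty = CuArray{Float32}(undef, 0, 0)  # empty matrix on the GPU
norm(A_empty)                            # one candidate operation on an empty array
```

If that (or a similar call on an empty CuArray) throws the same DivideError, it would at least narrow down where the integer division by zero happens.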
Is this something I've done? I don't see anyone else reporting this issue.
Happy to post the whole output but I've forgotten how.
Roger