Calling Flux.gpu downloads the CUDNN artifact on the first call of every REPL session/script

Pretty much as the title says. Whenever I call e.g.

import Flux
d_gpu = Flux.gpu(Flux.Dense(10, 10))
┌ Warning: The NVIDIA driver on this system only supports up to CUDA 11.0.0.
│ For performance reasons, it is recommended to upgrade to a driver that supports CUDA 11.2 or higher.
└ @ CUDA ~/.julia/packages/CUDA/DfvRa/src/initialization.jl:70
  Downloading artifact: CUDNN

whether in the REPL or in a script, it starts downloading a CUDNN artifact (without showing any version information).

Regarding package information, ] status returns:

Status `~/.julia/environments/v1.8/Project.toml`
  [6e4b80f9] BenchmarkTools v1.3.1
  [052768ef] CUDA v3.12.0
  [aae7a2af] DiffEqFlux v1.52.0
  [0c46a032] DifferentialEquations v7.4.0
  [31c24e10] Distributions v0.25.75
  [587475ba] Flux v0.13.6
  [c8e1da08] IterTools v1.4.0
  [033835bb] JLD2 v0.4.23
  [d3d80556] LineSearches v7.2.0
  [b2108857] Lux v0.4.24
  [961ee093] ModelingToolkit v8.23.0
  [76087f3c] NLopt v0.6.5
  [872c559c] NNlib v0.8.9
  [a00861dc] NNlibCUDA v0.2.4
  [429524aa] Optim v1.7.3
  [3bd65402] Optimisers v0.2.9
  [7f7a1694] Optimization v3.9.0
  [4e6fcdb7] OptimizationNLopt v0.1.0
  [36348300] OptimizationOptimJL v0.1.2
  [42dfb2eb] OptimizationOptimisers v0.1.0
  [842ac81e] OptimizationQuadDIRECT v0.1.0 `https://github.com/SciML/Optimization.jl:lib/OptimizationQuadDIRECT#master`
  [d330b81b] PyPlot v2.11.0
  [dae52e8d] QuadDIRECT v0.1.2 `https://github.com/timholy/QuadDIRECT.jl.git#master`
  [2913bbd2] StatsBase v0.33.21
  [d1185830] SymbolicUtils v0.19.11
  [0c5d862f] Symbolics v4.10.4
  [37e2e46d] LinearAlgebra

and nvidia-smi shows CUDA Version: 11.0 (this one is out of my control). How can I avoid downloading the artifact every time? Any help is appreciated; if you need more information, let me know and I’ll do what I can.
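In case it helps with diagnosis: I assume downloaded artifacts should persist under the depot’s artifacts directory between sessions. Here is a quick way to inspect that (a sketch, assuming the default ~/.julia depot; the directory names are content hashes, one per installed artifact):

# Sketch: list installed artifact directories in the primary depot.
# If CUDNN keeps being re-downloaded, its directory may be missing here
# after a restart, or the depot may not be writable/persistent.
artifact_dir = joinpath(first(DEPOT_PATH), "artifacts")
isdir(artifact_dir) ? readdir(artifact_dir) : "no artifacts directory yet"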

Try running with JULIA_DEBUG=CUDA; that should print some more details.

Is that in the REPL? I’ve just tried that and get:

julia> import CUDA
julia> JULIA_DEBUG=CUDA
CUDA
julia> import Flux
julia> a = Flux.Dense(10, 10)
Dense(10 => 10)     # 110 parameters
julia> b = Flux.gpu(a)
┌ Warning: The NVIDIA driver on this system only supports up to CUDA 11.0.0.
│ For performance reasons, it is recommended to upgrade to a driver that supports CUDA 11.2 or higher.
└ @ CUDA ~/.julia/packages/CUDA/DfvRa/src/initialization.jl:70
  Downloaded artifact: CUDNN
  Downloaded artifact: CUDNN
┌ Warning: CUDA.jl found cuda, but did not find libcudnn. Some functionality will not be available.
└ @ Flux ~/.julia/packages/Flux/4k0Ls/src/functor.jl:195
Dense(10 => 10)     # 110 parameters

Hope that helps.

No, that’s an environment variable you need to set before starting Julia.
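For example, from a bash shell (an illustration, not from the original thread):

# set the variable for a single Julia session
$ JULIA_DEBUG=CUDA julia

# or export it for the whole shell session
$ export JULIA_DEBUG=CUDA
$ julia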

Ah, thank you, my apologies, I was not aware of that. I tried it and it doesn’t seem to give any extra information, at least based on the calls that I’ve made after setting it.

Can you post the output of CUDA.versioninfo()?

Ah, thank you, good suggestion, got some more weirdness!

julia> import CUDA
julia> CUDA.versioninfo()
┌ Warning: The NVIDIA driver on this system only supports up to CUDA 11.0.0.
│ For performance reasons, it is recommended to upgrade to a driver that supports CUDA 11.2 or higher.
└ @ CUDA ~/.julia/packages/CUDA/DfvRa/src/initialization.jl:70
CUDA toolkit 11.7, artifact installation
NVIDIA driver 450.203.3, for CUDA 11.0
CUDA driver 11.0

Libraries: 
- CUBLAS: 11.10.1
- CURAND: 10.2.10
- CUFFT: 10.7.2
- CUSOLVER: 11.3.5
- CUSPARSE: 11.7.3
- CUPTI: 17.0.0
- NVML: 11.0.0+450.203.3
  Downloaded artifact: CUDNN
  Downloaded artifact: CUDNN
- CUDNN: missing
  Downloaded artifact: CUTENSOR
- CUTENSOR: 1.4.0 (for CUDA 11.5.0)

Toolchain:
- Julia: 1.8.0
- LLVM: 13.0.1
- PTX ISA support: 3.2, 4.0, 4.1, 4.2, 4.3, 5.0, 6.0, 6.1, 6.3, 6.4, 6.5, 7.0
- Device capability support: sm_35, sm_37, sm_50, sm_52, sm_53, sm_60, sm_61, sm_62, sm_70, sm_72, sm_75, sm_80

1 device:
  0: GeForce RTX 2060 (sm_75, 5.708 GiB / 5.795 GiB available)
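For what it’s worth, the "CUDNN: missing" line above is what drives Flux’s libcudnn warning at functor.jl:195; you can query it directly (a sketch; CUDA.has_cudnn() is the CUDA.jl 3.x query, as far as I know):

import CUDA
CUDA.has_cudnn()  # expected to be false here, matching "CUDNN: missing" above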