ERROR: UndefVarError: parameters not defined

Hi,
I want to do some reinforcement learning in Julia, but when I try to perform my action, it always throws this error. Do you know what I should do? I have already installed the latest versions of the packages.

function action(s_norm)
    println("greedy Action")
    act_values = model(s_norm |> gpu)   # move the normalized state to the GPU and evaluate the network
    a = Flux.onecold(act_values)        # index of the largest output value = greedy action
end

The “model” here is my neural network (a Flux Chain of Dense layers).
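
For context, the model is built roughly like this (a sketch only; the layer sizes are placeholders, the real dimensions come from my environment), which matches the Chain of Dense layers in the stack trace below:

using Flux, CUDA

state_size, action_size = 6, 3   # placeholder dimensions, not my real ones

model = Chain(
    Dense(state_size, 64, relu),   # hidden layers with relu, output layer with tanh,
    Dense(64, 64, relu),           # as shown in the stack trace
    Dense(64, action_size, tanh),
) |> gpu                           # move the weights to the GPU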

greedy Action
ERROR: UndefVarError: parameters not defined
Stacktrace:
[1] classify_arguments(job::GPUCompiler.CompilerJob, codegen_f::LLVM.Function)
@ GPUCompiler C:\Users\aetan\.julia\packages\GPUCompiler\XwWPj\src\irgen.jl:312
[2] lower_byval(job::GPUCompiler.CompilerJob, mod::LLVM.Module, entry_f::LLVM.Function)
@ GPUCompiler C:\Users\aetan\.julia\packages\GPUCompiler\XwWPj\src\irgen.jl:354
[3] process_entry!(job::GPUCompiler.CompilerJob{GPUCompiler.PTXCompilerTarget, CUDA.CUDACompilerParams, GPUCompiler.FunctionSpec{GPUArrays.var"#broadcast_kernel#16", Tuple{CUDA.CuKernelContext, CuDeviceVector{Float32, 1}, Base.Broadcast.Broadcasted{Nothing, Tuple{Base.OneTo{Int64}}, typeof(relu), Tuple{Base.Broadcast.Broadcasted{CUDA.CuArrayStyle{1}, Nothing, typeof(+), Tuple{Base.Broadcast.Extruded{CuDeviceVector{Float32, 1}, Tuple{Bool}, Tuple{Int64}}, Base.Broadcast.Extruded{CuDeviceVector{Float32, 1}, Tuple{Bool}, Tuple{Int64}}}}}}, Int64}}}, mod::LLVM.Module, entry::LLVM.Function)
@ GPUCompiler C:\Users\aetan\.julia\packages\GPUCompiler\XwWPj\src\ptx.jl:84
[4] irgen(job::GPUCompiler.CompilerJob, method_instance::Core.MethodInstance, world::UInt64)
@ GPUCompiler C:\Users\aetan\.julia\packages\GPUCompiler\XwWPj\src\irgen.jl:62
[5] macro expansion
@ C:\Users\aetan\.julia\packages\GPUCompiler\XwWPj\src\driver.jl:130 [inlined]
[6] macro expansion
@ C:\Users\aetan\.julia\packages\TimerOutputs\LDL7n\src\TimerOutput.jl:252 [inlined]
[7] macro expansion
@ C:\Users\aetan\.julia\packages\GPUCompiler\XwWPj\src\driver.jl:129 [inlined]
[8] emit_llvm(job::GPUCompiler.CompilerJob, method_instance::Any, world::UInt64; libraries::Bool, deferred_codegen::Bool, optimize::Bool, only_entry::Bool)
@ GPUCompiler C:\Users\aetan\.julia\packages\GPUCompiler\XwWPj\src\utils.jl:62
[9] emit_llvm
@ C:\Users\aetan\.julia\packages\GPUCompiler\XwWPj\src\utils.jl:60 [inlined]
[10] cufunction_compile(job::GPUCompiler.CompilerJob)
@ CUDA C:\Users\aetan\.julia\packages\CUDA\M4jkK\src\compiler\execution.jl:305
[11] check_cache
@ C:\Users\aetan\.julia\packages\GPUCompiler\XwWPj\src\cache.jl:44 [inlined]
[12] cached_compilation
@ C:\Users\aetan\.julia\packages\GPUArrays\Z5nPF\src\host\broadcast.jl:57 [inlined]
[13] cached_compilation(cache::Dict{UInt64, Any}, job::GPUCompiler.CompilerJob{GPUCompiler.PTXCompilerTarget, CUDA.CUDACompilerParams, GPUCompiler.FunctionSpec{GPUArrays.var"#broadcast_kernel#16", Tuple{CUDA.CuKernelContext, CuDeviceVector{Float32, 1}, Base.Broadcast.Broadcasted{Nothing, Tuple{Base.OneTo{Int64}}, typeof(relu), Tuple{Base.Broadcast.Broadcasted{CUDA.CuArrayStyle{1}, Nothing, typeof(+), Tuple{Base.Broadcast.Extruded{CuDeviceVector{Float32, 1}, Tuple{Bool}, Tuple{Int64}}, Base.Broadcast.Extruded{CuDeviceVector{Float32, 1}, Tuple{Bool}, Tuple{Int64}}}}}}, Int64}}}, compiler::typeof(CUDA.cufunction_compile), linker::typeof(CUDA.cufunction_link))
@ GPUCompiler C:\Users\aetan\.julia\packages\GPUCompiler\XwWPj\src\cache.jl:0
[14] cufunction(f::GPUArrays.var"#broadcast_kernel#16", tt::Type{Tuple{CUDA.CuKernelContext, CuDeviceVector{Float32, 1}, Base.Broadcast.Broadcasted{Nothing, Tuple{Base.OneTo{Int64}}, typeof(relu), Tuple{Base.Broadcast.Broadcasted{CUDA.CuArrayStyle{1}, Nothing, typeof(+), Tuple{Base.Broadcast.Extruded{CuDeviceVector{Float32, 1}, Tuple{Bool}, Tuple{Int64}}, Base.Broadcast.Extruded{CuDeviceVector{Float32, 1}, Tuple{Bool}, Tuple{Int64}}}}}}, Int64}}; name::Nothing, kwargs::Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
@ CUDA C:\Users\aetan\.julia\packages\CUDA\M4jkK\src\compiler\execution.jl:294
[15] cufunction
@ C:\Users\aetan\.julia\packages\CUDA\M4jkK\src\compiler\execution.jl:288 [inlined]
[16] macro expansion
@ C:\Users\aetan\.julia\packages\CUDA\M4jkK\src\compiler\execution.jl:102 [inlined]
[17] #launch_heuristic#280
@ C:\Users\aetan\.julia\packages\CUDA\M4jkK\src\gpuarrays.jl:17 [inlined]
[18] launch_heuristic
@ C:\Users\aetan\.julia\packages\CUDA\M4jkK\src\gpuarrays.jl:17 [inlined]
[19] copyto!
@ C:\Users\aetan\.julia\packages\GPUArrays\Z5nPF\src\host\broadcast.jl:63 [inlined]
[20] copyto!
@ .\broadcast.jl:936 [inlined]
[21] copy
@ C:\Users\aetan\.julia\packages\GPUArrays\Z5nPF\src\host\broadcast.jl:47 [inlined]
[22] materialize
@ .\broadcast.jl:883 [inlined]
[23] (::Dense{typeof(relu), CuArray{Float32, 2}, CuArray{Float32, 1}})(x::CuArray{Float32, 1})
@ Flux C:\Users\aetan\.julia\packages\Flux\qp1gc\src\layers\basic.jl:147
[24] applychain
@ C:\Users\aetan\.julia\packages\Flux\qp1gc\src\layers\basic.jl:36 [inlined]
[25] (::Chain{Tuple{Dense{typeof(relu), CuArray{Float32, 2}, CuArray{Float32, 1}}, Dense{typeof(relu), CuArray{Float32, 2}, CuArray{Float32, 1}}, Dense{typeof(tanh), CuArray{Float32, 2}, CuArray{Float32, 1}}}})(x::CuArray{Float32, 1})
@ Flux C:\Users\aetan\.julia\packages\Flux\qp1gc\src\layers\basic.jl:38
[26] action(s_norm::Vector{Float32})
@ Main c:\Users\aetan\Desktop\DQN_Shems\DQN_Test.jl:58
[27] episode!(env::Shems{Reinforce.ShemsEnv_DQN.ShemsState{Float32}}; NUM_STEPS::Int64, train::Bool, render::Bool, track::Int64, rng_ep::Int64)
@ Main c:\Users\aetan\Desktop\DQN_Shems\DQN_Test.jl:75
[28] top-level scope
@ c:\Users\aetan\Desktop\DQN_Shems\DQN_Reinforce_Shems.jl:67

Update your packages. There are plenty of existing topics and issues mentioning this exact error; it was caused by an update of ExprTools.jl some months ago, and an update of LLVM.jl should resolve it.
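
If in doubt, something along these lines in the REPL will show whether LLVM.jl and ExprTools.jl actually moved to recent versions (the exact versions you see will differ):

using Pkg
Pkg.update()              # upgrade everything the resolver allows
Pkg.status("LLVM")        # which LLVM.jl version did you actually end up with?
Pkg.status("ExprTools")   # and which ExprTools.jl version?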

Unfortunately that didn't work :confused:

I changed the “|> gpu” to “|> cpu” and now it works.

Then you did not upgrade all packages, e.g. because some are being held back. Try forcibly installing the latest version of LLVM.jl using ]add LLVM@4.11.1 and inspect why it fails.
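
Roughly, either in the Pkg REPL or through the API (the version number just mirrors the pin suggested above):

# pkg> add LLVM@4.11.1        (press ] at the julia> prompt for the Pkg REPL)
# or, equivalently:
using Pkg
Pkg.add(name = "LLVM", version = "4.11.1")   # if resolution fails, the error message names the compat bound that blocks it
Pkg.status("LLVM")                           # confirm which version actually got resolved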

If there are known incompatibilities, they should be propagated to the registry by fixing compat bounds, so that the package manager won’t knowingly serve broken combinations of packages.

It’s not that simple, I think. LLVM.jl itself does not depend on ExprTools.jl, whose update introduced an exported function that clashes with one of LLVM.jl’s exports (Both ExprTools and LLVM export "parameters" · Issue #214 · JuliaGPU/GPUCompiler.jl · GitHub). So compat bounds would need to be introduced in the packages that depend on both LLVM and ExprTools, which seems impractical.
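
A minimal sketch of the mechanism (the module names here are made up, but the failure mode is the same): when a module pulls in two packages that both export the same name, any unqualified use of that name throws exactly this UndefVarError, which is why the error surfaces inside GPUCompiler's irgen.jl in the stack trace above.

module FakeExprTools
export parameters
parameters(m) = "ExprTools-style parameters"
end

module FakeLLVM
export parameters
parameters(f) = "LLVM-style parameters"
end

using .FakeExprTools, .FakeLLVM

parameters(nothing)
# Julia warns that both modules export "parameters", then throws:
# ERROR: UndefVarError: parameters not defined
# Qualifying the call (e.g. FakeLLVM.parameters) avoids the ambiguity.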