Free GPU resources?

Hi,
I’m running the same Julia code (using CuArrays etc.) on two machines, one at home and one at work. At home the code works fine; that machine is a 16 GB RAM Ubuntu 18.04 Linux box with a GeForce 1050 Ti (4 GB VRAM). The computer at work has a Xeon with 32 GB RAM, the same Ubuntu, and a GPU (I don’t remember the model) with 6 GB. On that machine, running the exact same code fails with

CUDA error: too many resources requested for launch (code 701, ERROR_LAUNCH_OUT_OF_RESOURCES)

Stacktrace:
[1] throw_api_error(::CUDAdrv.cudaError_enum) at /home/mazzanti/.julia/packages/CUDAdrv/Uc14X/src/error.jl:105
[2] macro expansion at /home/mazzanti/.julia/packages/CUDAdrv/Uc14X/src/error.jl:112 [inlined]
[3] cuLaunchKernel(::CuFunction, ::UInt32, ::UInt32, ::UInt32, ::UInt32, ::UInt32, ::UInt32, ::Int64, ::CuStream, ::Array{Ptr{Nothing},1}, ::Ptr{Nothing}) at /home/mazzanti/.julia/packages/CUDAapi/XuSHC/src/call.jl:93
[4] (::CUDAdrv.var"#566#567"{Bool,Int64,CuStream,CuFunction})(::Array{Ptr{Nothing},1}) at /home/mazzanti/.julia/packages/CUDAdrv/Uc14X/src/execution.jl:67
[5] macro expansion at /home/mazzanti/.julia/packages/CUDAdrv/Uc14X/src/execution.jl:33 [inlined]
[6] pack_arguments(::CUDAdrv.var"#566#567"{Bool,Int64,CuStream,CuFunction}, ::CuDeviceArray{Float64,2,CUDAnative.AS.Global}, ::CuDeviceArray{Float64,2,CUDAnative.AS.Global}) at /home/mazzanti/.julia/packages/CUDAdrv/Uc14X/src/execution.jl:10
[7] launch(::CuFunction, ::CuDeviceArray{Float64,2,CUDAnative.AS.Global}, ::Vararg{CuDeviceArray{Float64,2,CUDAnative.AS.Global},N} where N; blocks::Int64, threads::Int64, cooperative::Bool, shmem::Int64, stream::CuStream) at /home/mazzanti/.julia/packages/CUDAdrv/Uc14X/src/execution.jl:60
[8] #571 at /home/mazzanti/.julia/packages/CUDAdrv/Uc14X/src/execution.jl:136 [inlined]
[9] macro expansion at /home/mazzanti/.julia/packages/CUDAdrv/Uc14X/src/execution.jl:95 [inlined]
[10] convert_arguments at /home/mazzanti/.julia/packages/CUDAdrv/Uc14X/src/execution.jl:78 [inlined]
[11] #cudacall#570 at /home/mazzanti/.julia/packages/CUDAdrv/Uc14X/src/execution.jl:135 [inlined]
[12] #cudacall#120 at /home/mazzanti/.julia/packages/CUDAnative/e0IdN/src/execution.jl:217 [inlined]
[13] macro expansion at /home/mazzanti/.julia/packages/CUDAnative/e0IdN/src/execution.jl:198 [inlined]
[14] #call#104 at /home/mazzanti/.julia/packages/CUDAnative/e0IdN/src/execution.jl:170 [inlined]
[15] #_#123 at /home/mazzanti/.julia/packages/CUDAnative/e0IdN/src/execution.jl:345 [inlined]
[16] macro expansion at /home/mazzanti/.julia/packages/CUDAnative/e0IdN/src/execution.jl:109 [inlined]
[17] Log_Unnorm_Probs_CUDA(::CuArray{Float64,2,Nothing}, ::CuArray{Float64,2,Nothing}, ::RBM_net{Float64}, ::Binary_01) at /home/mazzanti/Julia_1/Modules/CUDA/RBM_module_CUDA_J1.jl:152
[18] Log_Z_CUDA(::CuArray{Float64,2,Nothing}, ::RBM_net{Float64}) at /home/mazzanti/Julia_1/Modules/CUDA/RBM_module_CUDA_J1.jl:185
[19] Log_Z_CUDA(::RBM_net{Float64}, ::Int64) at /home/mazzanti/Julia_1/Modules/CUDA/RBM_module_CUDA_J1.jl:300
[20] Log_Z_CUDA(::RBM_net{Float64}; Mblock::Int64) at /home/mazzanti/Julia_1/Modules/CUDA/RBM_module_CUDA_J1.jl:219
[21] Log_Z_CUDA(::RBM_net{Float64}) at /home/mazzanti/Julia_1/Modules/CUDA/RBM_module_CUDA_J1.jl:210
[22] top-level scope at ./In[18]:33

Is there a way to fix that? Can resources be freed from within the code?

Best regards and thanks,

Ferran.

Use the occupancy API to determine a legal launch configuration. Not all GPUs allow the same number of threads per block, so a configuration that launches fine on one card can fail on another. There are no resources to free here; these are device limits.
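
For illustration, here is a minimal sketch of that pattern using CUDA.jl (the successor to the CUDAdrv/CUDAnative packages in your stacktrace, which expose a similar occupancy query). The kernel `my_kernel!` and its arguments are placeholders, not your `Log_Unnorm_Probs_CUDA` code:

```julia
using CUDA

# Placeholder kernel: doubles each element of x into out.
function my_kernel!(out, x)
    i = threadIdx().x + (blockIdx().x - 1) * blockDim().x
    if i <= length(out)
        @inbounds out[i] = 2 * x[i]
    end
    return nothing
end

x   = CUDA.rand(Float64, 1_000_000)
out = similar(x)

# Compile without launching, then ask the occupancy API how many
# threads per block this kernel can actually use on this device.
kernel  = @cuda launch=false my_kernel!(out, x)
config  = launch_configuration(kernel.fun)
threads = min(length(out), config.threads)
blocks  = cld(length(out), threads)

# Launch with a configuration that is legal for the current GPU.
kernel(out, x; threads=threads, blocks=blocks)
```

The point is that the two cards have different per-block thread and register limits, so a hard-coded `threads=...` value that works on the 1050 Ti can exceed the limits of the GPU at work; querying the launch configuration at run time sidesteps that.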