Activating an environment on multiple SGE cluster nodes fails

Hello,

I have this pipeline to run a simulation on an SGE cluster (see below). Basically, I activate the same environment with @everywhere and later load packages from it.

The problem is that Julia now fails when activating the environment on the workers:

using Pkg
PATH = joinpath(@__DIR__, "cluster_test")
Pkg.activate(PATH)
using ClusterManagers, Distributed, Logging
@info "Activating: $PATH"

addprocs_sge(5)
@everywhere using Distributed, Pkg
@everywhere Pkg.activate($PATH)   # this is the line that fails

for worker in workers()
    @spawnat worker print("$(myid())")
end

The script returns this error at @everywhere Pkg.activate($PATH):

$ julia _research/runtests.jl
  Activating project at `~/Documents/Research/phd_project/spiking/Tripod/_research/cluster_test`
[ Info: Activating: /home/alequa/Documents/Research/phd_project/spiking/Tripod/_research/cluster_test
Job 3940105 in queue.
Running.
  Activating project at `~/Documents/Research/phd_project/spiking/Tripod/_research/cluster_test`
ERROR: LoadError: On worker 5:
IOError: close: Unknown system error -116 (Unknown system error -116)
Stacktrace:
  [1] uv_error
    @ ./libuv.jl:100 [inlined]
  [2] close
    @ ./filesystem.jl:141 [inlined]
  [3] close
    @ ~/.julia/juliaup/julia-1.10.4+0.x64.linux.gnu/share/julia/stdlib/v1.10/FileWatching/src/pidfile.jl:340
  [4] #mkpidlock#7
    @ ~/.julia/juliaup/julia-1.10.4+0.x64.linux.gnu/share/julia/stdlib/v1.10/FileWatching/src/pidfile.jl:95
  [5] mkpidlock
    @ ~/.julia/juliaup/julia-1.10.4+0.x64.linux.gnu/share/julia/stdlib/v1.10/FileWatching/src/pidfile.jl:90 [inlined]
  [6] mkpidlock
    @ ~/.julia/juliaup/julia-1.10.4+0.x64.linux.gnu/share/julia/stdlib/v1.10/FileWatching/src/pidfile.jl:88 [inlined]
  [7] write_env_usage
    @ ~/.julia/juliaup/julia-1.10.4+0.x64.linux.gnu/share/julia/stdlib/v1.10/Pkg/src/Types.jl:539
  [8] EnvCache
    @ ~/.julia/juliaup/julia-1.10.4+0.x64.linux.gnu/share/julia/stdlib/v1.10/Pkg/src/Types.jl:377
  [9] EnvCache
    @ ~/.julia/juliaup/julia-1.10.4+0.x64.linux.gnu/share/julia/stdlib/v1.10/Pkg/src/Types.jl:356 [inlined]
 [10] add_snapshot_to_undo
    @ ~/.julia/juliaup/julia-1.10.4+0.x64.linux.gnu/share/julia/stdlib/v1.10/Pkg/src/API.jl:2189
 [11] add_snapshot_to_undo
    @ ~/.julia/juliaup/julia-1.10.4+0.x64.linux.gnu/share/julia/stdlib/v1.10/Pkg/src/API.jl:2185 [inlined]
 [12] #activate#310
    @ ~/.julia/juliaup/julia-1.10.4+0.x64.linux.gnu/share/julia/stdlib/v1.10/Pkg/src/API.jl:1973
 [13] activate
    @ ~/.julia/juliaup/julia-1.10.4+0.x64.linux.gnu/share/julia/stdlib/v1.10/Pkg/src/API.jl:1932
 [14] top-level scope
    @ none:1
 [15] eval
    @ ./boot.jl:385
 [16] #invokelatest#2
    @ ./essentials.jl:892
 [17] invokelatest
    @ ./essentials.jl:889
 [18] #114
    @ ~/.julia/juliaup/julia-1.10.4+0.x64.linux.gnu/share/julia/stdlib/v1.10/Distributed/src/process_messages.jl:303
 [19] run_work_thunk
    @ ~/.julia/juliaup/julia-1.10.4+0.x64.linux.gnu/share/julia/stdlib/v1.10/Distributed/src/process_messages.jl:70
 [20] run_work_thunk
    @ ~/.julia/juliaup/julia-1.10.4+0.x64.linux.gnu/share/julia/stdlib/v1.10/Distributed/src/process_messages.jl:79
 [21] #100
    @ ~/.julia/juliaup/julia-1.10.4+0.x64.linux.gnu/share/julia/stdlib/v1.10/Distributed/src/process_messages.jl:88

...and 1 more exception.

Stacktrace:
 [1] sync_end(c::Channel{Any})
   @ Base ./task.jl:448
 [2] macro expansion
   @ ./task.jl:480 [inlined]
 [3] remotecall_eval(m::Module, procs::Vector{Int64}, ex::Expr)
   @ Distributed ~/.julia/juliaup/julia-1.10.4+0.x64.linux.gnu/share/julia/stdlib/v1.10/Distributed/src/macros.jl:219
 [4] top-level scope
   @ ~/.julia/juliaup/julia-1.10.4+0.x64.linux.gnu/share/julia/stdlib/v1.10/Distributed/src/macros.jl:203
in expression starting at /home/alequa/Documents/Research/phd_project/spiking/Tripod/_research/runtests.jl:10
FAIL: 1

This is the Project.toml of the environment:

name = "cluster_test"

[deps]
ClusterManagers = "34f1f09b-3a8b-5176-ab39-66d58a4d544e"
DrWatson = "634d3b9d-ee7a-5ddf-bec9-22491ea816e1"

[compat]
DrWatson = "2.15.0"
julia = "1.10.4"

Because it was working with only a few workers, I inferred that the problem is concurrent access to the environment; activating one worker at a time with the following code works:

for worker in workers()
    r = @spawnat worker Pkg.activate(PATH)
    fetch(r)   # wait for this worker to finish before activating on the next one
end
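
An alternative I am considering (untested sketch, and assuming addprocs_sge forwards the standard exeflags keyword of addprocs to the launched workers) is to start every worker with the project already selected via --project, so that no worker has to call Pkg.activate at all and the pidfile that Pkg locks when recording environment usage is never touched concurrently:

using Pkg, Distributed, ClusterManagers

# Same directory as PATH above, just under a different name.
project_dir = joinpath(@__DIR__, "cluster_test")
Pkg.activate(project_dir)

# Launch the SGE workers with the environment preselected on the command line,
# so `using` on the workers resolves against cluster_test without any remote activate.
addprocs_sge(5; exeflags = `--project=$(project_dir)`)

@everywhere using Distributed

Is this a reasonable way to sidestep the concurrent mkpidlock calls, or is there a recommended pattern for activating a shared environment on many workers at once?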