When using `remotecall`, the following behavior is observed on Julia stable v0.6.2:
```
julia> addprocs(1)
1-element Array{Int64,1}:
 2

julia> wait(@spawnat 2 global x = 0)
Future(2, 1, 3, Nullable{Any}())

julia> f = ()->println(x)
(::#5) (generic function with 1 method)

julia> function g()
           println(x)
       end
g (generic function with 1 method)

julia> remotecall_wait(f, 2)
        From worker 2:  0
Future(2, 1, 6, Nullable{Any}())

julia> remotecall_wait(g, 2)
ERROR: On worker 2:
UndefVarError: #g not defined
deserialize_datatype at ./serialize.jl:973
handle_deserialize at ./serialize.jl:677
deserialize at ./serialize.jl:637
handle_deserialize at ./serialize.jl:684
deserialize_msg at ./distributed/messages.jl:98
message_handler_loop at ./distributed/process_messages.jl:161
process_tcp_streams at ./distributed/process_messages.jl:118
#99 at ./event.jl:73
Stacktrace:
 [1] #remotecall_wait#146(::Array{Any,1}, ::Function, ::Function, ::Base.Distributed.Worker) at ./distributed/remotecall.jl:382
 [2] remotecall_wait(::Function, ::Base.Distributed.Worker) at ./distributed/remotecall.jl:373
 [3] #remotecall_wait#149(::Array{Any,1}, ::Function, ::Function, ::Int64) at ./distributed/remotecall.jl:394
 [4] remotecall_wait(::Function, ::Int64) at ./distributed/remotecall.jl:394
```
So there are at least two issues here:
- Lexical scoping does not apply to globals referenced by anonymous functions called remotely: the global `x` is resolved in the worker's `Main` rather than serialized from the caller. This should be documented (see the sketch after this list).
- Functions are not as first-class as one would expect, because `g` cannot be remotely called like `f`.
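To make the first point concrete, here is a sketch contrasting a global with a captured local (the names `y` and `h` are mine, not from the session above; behavior as I understand the v0.6 serializer):

```julia
# A global referenced from a closure is looked up in the worker's Main,
# so this prints the worker's x (0), even though x is undefined on the master:
f = () -> println(x)
remotecall_wait(f, 2)      # From worker 2: 0

# A captured local, by contrast, is serialized together with the closure:
let y = 42
    h = () -> println(y)
    remotecall_wait(h, 2)  # From worker 2: 42, regardless of worker-side globals
end
```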
I would advise keeping the apparent dynamic scoping behavior and improving the documentation, unless the same functionality can be achieved otherwise.
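For completeness, the named-function case can be made to work by defining the function on the workers first; a minimal sketch, assuming the usual `@everywhere` idiom:

```julia
# Define g on the master and on every worker, so that the name #g
# can be resolved during deserialization on worker 2:
@everywhere function g()
    println(x)
end

remotecall_wait(g, 2)   # From worker 2: 0
```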
Maybe off topic, but I found that the built-in parallelism has somewhat sub-optimal performance by design compared to MPI-based implementations (remote calls have to be serialized, for one example, which hurts if you need fine-grained control over your workers). But AFAIK MPI.jl currently cannot make MPI calls from the REPL, except in another worker process through MPIManager, which is not the typical REPL convenience one would expect.
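For reference, the MPIManager route looks roughly like this (a sketch based on my reading of the MPI.jl README; the exact API may differ):

```julia
using MPI

# Spawn 4 MPI ranks and wire them into the Distributed cluster as workers.
manager = MPIManager(np=4)
addprocs(manager)

# Run an expression on all MPI ranks: MPI calls work *inside* the workers,
# but not directly at this REPL, which is the inconvenience noted above.
@mpi_do manager begin
    comm = MPI.COMM_WORLD
    println("rank $(MPI.Comm_rank(comm)) of $(MPI.Comm_size(comm))")
end
```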
Given that Julia has a strong focus on numerical computing, it is probable that quite a portion of its user base is comfortable with MPI. What is the rationale behind choosing libuv over MPI as the backend of the built-in parallelism in Julia?