Using Parameters.jl with Distributed.jl causes an error

I am using the Distributed package in combination with Parameters.jl. Initially, I simulated a model with 10 parameters, and Parameters.jl was helpful for building the necessary functions. Now, however, I am trying to run the simulation with thousands of parameter combinations, and I thought the Distributed package was the way to go.

Here is a simple example: the model takes a vector [a, b, c] and returns a + b + c. What I am actually interested in is finding [a, b, c] such that the return value is closest to some target value X (the example is trivial, but it illustrates the setup).

What I do is generate random numbers on each worker, run the model multiple times on each worker, collect the results on the host, and pick the best one. However, if I simply apply @everywhere to the two definitions, it doesn't work. Even if I add global in front of the parameter or mutable in front of the struct, it doesn't work.

I would like to know if there is any way I can accomplish this task using Parameters.jl or another way to handle my parameters conveniently.

using Parameters

@with_kw struct Parameter
    a = 0.1
    b = 0.2
    c = 0.3
end

function model(x)
    param = Parameter(a = x[1], b = x[2], c = x[3])
    @unpack a, b, c = param   # unpack the struct fields via Parameters.jl
    return a + b + c
end
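Roughly, this is the distributed pattern I am attempting (a minimal sketch of the intent; the worker count, the target X, and the number of draws per worker are placeholders):

using Distributed
addprocs(4)                                    # placeholder worker count

@everywhere begin
    using Parameters

    @with_kw struct Parameter
        a = 0.1
        b = 0.2
        c = 0.3
    end

    function model(x)
        param = Parameter(a = x[1], b = x[2], c = x[3])
        @unpack a, b, c = param
        return a + b + c
    end

    # draw n random candidates on this worker and keep the best one;
    # each worker has its own default RNG, so draws differ across workers
    function best_of(n, X)
        best_x, best_loss = zeros(3), Inf
        for _ in 1:n
            x = rand(3)
            loss = abs(model(x) - X)
            if loss < best_loss
                best_x, best_loss = x, loss
            end
        end
        return best_x, best_loss
    end
end

X = 1.0                                        # placeholder target
candidates = pmap(_ -> best_of(1_000, X), 1:nworkers())
best_x, best_loss = argmin(last, candidates)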

Maybe it would help to see the code which is not working. My guess is that the randomisation was not done properly on the distributed workers; normally, Parameters.jl and Distributed.jl work together without problems. In any case, you might want to check out NamedTuples or Base.@kwdef, either of which can be used instead of Parameters.jl.
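For instance, here is a minimal sketch with Base.@kwdef (no extra dependency; the struct name is arbitrary):

Base.@kwdef struct Params
    a::Float64 = 0.1
    b::Float64 = 0.2
    c::Float64 = 0.3
end

p = Params(a = 0.5)        # keyword constructor with defaults
(; a, b, c) = p            # property destructuring (Julia 1.7+)
a + b + c                  # 0.5 + 0.2 + 0.3 = 1.0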

Going in a slightly orthogonal direction: in my experience it is often easier to parallelise tasks like yours with Threads instead of Distributed.
Especially if you have a nicely defined function model(x) which does not mess up any global variables, you could do something like this:

function model(x) 
    param = (a = x[1], b = x[2], c = x[3])     
    (;a, b, c) = param                        # or just write (a,b,c) = x
    return abs(a+b+c - 1.0)                   # some random goal function
end

n_samples = 1000

inputs = randn(n_samples, 3)
results = zeros(n_samples)

Threads.@threads for i in 1:n_samples   # loop over all samples, not just the first 100
    results[i] = model(inputs[i, :])
end

model_min, i_min = findmin(results)
x_min = inputs[i_min, :]
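Note that this only helps if Julia was started with more than one thread, e.g. julia -t 4 or by setting the JULIA_NUM_THREADS environment variable; you can check with:

Threads.nthreads()   # should be > 1 for @threads to give a speedup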

Thank you very much for the comments.
Ultimately, I broke my code into pieces and examined which part was causing the endless computation.

  1. The Parameters package does work with Distributed. You were entirely correct.

  2. Essentially, I was using a multi-start Nelder-Mead algorithm to minimize a loss function evaluated at the solution of an NLsolve call. Solving the underlying system of non-linear equations takes around 5 seconds for each parameter combination.

  3. I fixed 7 of the parameters, left only one parameter free, and imposed very tight lower and upper bounds. However, the minimization never terminates (a sketch of the setup follows this list).
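To make the setup concrete, here is a minimal sketch, assuming the Nelder-Mead implementation is Optim.jl's; the toy equations and the zero target merely stand in for my real NLsolve-based objective:

using Optim, NLsolve

# toy stand-in for my real system of non-linear equations;
# θ is the single free parameter (the other 7 are fixed)
function residuals!(F, u, θ)
    F[1] = u[1]^3 + u[2] - θ
    F[2] = u[1] - exp(-u[2])
end

function loss(θv)
    θ = θv[1]
    0.9 <= θ <= 1.1 || return Inf               # tight bounds via penalty
    sol = nlsolve((F, u) -> residuals!(F, u, θ), [0.5, 0.5])
    converged(sol) || return Inf                # penalize failed solves
    return sum(abs2, sol.zero)                  # placeholder target of zero
end

# cap iterations and wall time so the run cannot loop forever
result = optimize(loss, [1.0], NelderMead(),
                  Optim.Options(iterations = 100, time_limit = 600.0))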

I'm still trying to figure out how to fix this part. I agree with your point, but I'm reluctant to rewrite my code just to replace a package I currently use.