Namespace problems when running Optim in parallel?

Hi,
I keep running into errors such as:

ERROR: LoadError: On worker 2:
MethodError: no method matching minimizer(::Optim.MultivariateOptimizationResults{Float64,1,Optim.NewtonTrustRegion{Float64}})

or, the comical

ERROR: LoadError: On worker 2:
MethodError: Cannot convert an object of type Optim.MultivariateOptimizationResults{Float64,1,Optim.NewtonTrustRegion{Float64}} to an object of type Optim.MultivariateOptimizationResults{Float64,1,Optim.NewtonTrustRegion{Float64}}

Basic description:
I have a function within a parallel loop that includes the lines

future_Θmin_GP::Future = @spawn trust_shrink(x -> neg_log_likelihood_r(x, data, process_data_vector_gp, prior), rGP_est_TFFEMANOVA(data, prior))
future_Θmin_rG::Future = @spawn trust_shrink(x -> neg_log_likelihood_r(x, data, process_data_vector_rg, prior), rrG_est_TFFEMANOVA(data, prior))

Θ_hat_sol_GP::Optim.MultivariateOptimizationResults{Float64,1,Optim.NewtonTrustRegion{Float64}} = fetch(future_Θmin_GP)
Θ_hat_sol_rG::Optim.MultivariateOptimizationResults{Float64,1,Optim.NewtonTrustRegion{Float64}} = fetch(future_Θmin_rG)

Θ_hat_GP::Array{Float64,1} = Optim.minimizer(Θ_hat_sol_GP)[1:end-1]
Θ_hat_rG::Array{Float64,1} = Optim.minimizer(Θ_hat_sol_rG)[1:end-1]

trust_shrink is a wrapper for a call to Optim.

Running the code as above produces the convert error; removing the type annotations on the fetch calls produces the first error instead.

I would like to find a workaround, but I suspect something isn't working as intended on Optim's side of things: certain functions that really ought to be loaded into each worker's namespace aren't.
[EDIT: My workaround is to forget about the inner parallelization and simply spawn twice as many processes in the outer parallel loop. The outer loop drives a simulation that compares the performance of a variety of different functions. Since the primary purpose of most of these functions is use in simulations (which are going to be parallel anyway), running them serially should actually give better performance. I will probably just add a parallel=false argument to make it optional.]
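For what it's worth, a minimal sketch of what that parallel=false option could look like. This is hypothetical restructuring on my part; fit_both, obj_gp, and obj_rg are names I made up, while trust_shrink, neg_log_likelihood_r, and the estimator calls are the functions from the snippet above:

```julia
# Hypothetical wrapper: run the two optimizations in parallel or serially,
# controlled by a keyword argument.
function fit_both(data, prior; parallel::Bool=true)
    obj_gp = x -> neg_log_likelihood_r(x, data, process_data_vector_gp, prior)
    obj_rg = x -> neg_log_likelihood_r(x, data, process_data_vector_rg, prior)
    if parallel
        # Inner parallelization: one worker per optimization.
        f_gp = @spawn trust_shrink(obj_gp, rGP_est_TFFEMANOVA(data, prior))
        f_rg = @spawn trust_shrink(obj_rg, rrG_est_TFFEMANOVA(data, prior))
        sol_gp, sol_rg = fetch(f_gp), fetch(f_rg)
    else
        # Serial fallback for use inside an already-parallel simulation loop.
        sol_gp = trust_shrink(obj_gp, rGP_est_TFFEMANOVA(data, prior))
        sol_rg = trust_shrink(obj_rg, rrG_est_TFFEMANOVA(data, prior))
    end
    return Optim.minimizer(sol_gp)[1:end-1], Optim.minimizer(sol_rg)[1:end-1]
end
```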

I’m also running

@everywhere include("file.jl")

where “file.jl” says (among other things)

using Optim

So I am expecting Optim to be loaded into the scope of each worker.
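One thing worth checking (a guess at the diagnosis, not something I have confirmed): a "Cannot convert X to X" error in Julia usually means the two sides hold two distinct copies of the same type, which can happen when a package is loaded independently on each worker instead of being propagated from the master. A loading pattern that should avoid this, assuming a plain Distributed setup:

```julia
using Distributed
addprocs(2)

# Load Optim on the master first, then on every worker, before any @spawn.
# This keeps the type definitions identical across processes.
using Optim
@everywhere using Optim

# Only after that, bring in project code that assumes Optim is available:
@everywhere include("file.jl")
```

If file.jl is the only place that does `using Optim`, each worker may be resolving Optim on its own, so fetching an Optim.MultivariateOptimizationResults back to the master would fail exactly as above.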