No, at that point you should probably use the interface to write a custom loss function.
Thanks -
would this make sense to you?
L2(t, dat) = sum(L2Loss(t[i], dat[i]) for i in 1:length(t))
A
no, look at the docs: http://docs.juliadiffeq.org/latest/analysis/parameter_estimation.html#The-Loss-Function-1
L2(sol) = sum(abs2, dat[i] - sol(t[i]) for i in eachindex(t))
for example. So now it's any function on the solution; write the one you need.
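For instance, here's a self-contained toy version (the decay model and the synthetic data are made up purely to show the shape of it):

using DifferentialEquations, DiffEqParamEstim

decay(u, p, t) = -p[1] * u                       # toy model, for illustration only
prob = ODEProblem(decay, 1.0, (0.0, 1.0), [1.5])
t = 0.0:0.1:1.0
dat = [exp(-1.5 * ti) for ti in t]               # synthetic data at the true parameter

# Any callable on the solution works as the loss:
L2(sol) = sum(abs2, dat[i] - sol(t[i]) for i in eachindex(t))

obj = build_loss_objective(prob, Tsit5(), L2)
obj([1.0])                                       # objective value at a candidate parameter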
So I ended up adapting your L2Loss function to do what I wanted -
here is the code, as it belongs to you as much or more than it does to me!
Thanks for the help
struct L2Loss2{T,D,W} <: DiffEqBase.DECostFunction
    t::T
    data::D
    data_weight::W
end
function (f::L2Loss2)(sol::DiffEqBase.DESolution)
    t = f.t
    data = f.data
    weight = f.data_weight
    if sol.retcode != :Success
        return Inf
    end
    sumsq = 0.0
    if weight === nothing
        @inbounds for i in 1:length(data)
            # j starts at 2: the first entry of each data[i] is skipped
            for j in 2:length(data[i])
                sumsq += (data[i][j] - sol(t[i])[i, :][j])^2
            end
        end
    else
        @inbounds for i in 1:length(data)
            if weight isa Real
                for j in 2:length(data[i])
                    sumsq += ((data[i][j] - sol(t[i])[i, :][j])^2) * weight
                end
            else
                for j in 2:length(data[i])
                    sumsq += ((data[i][j] - sol(t[i])[i, :][j])^2) * weight[i, j]
                end
            end
        end
    end
    sumsq
end
# Cost functions are written assuming a data matrix;
# turn vectors into a 1×N matrix.
matrixize(x) = x isa Vector ? reshape(x, 1, length(x)) : x
L2Loss2(t, data; data_weight = nothing) = L2Loss2(t, matrixize(data), matrixize(data_weight))
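Hypothetical usage would look like this (with prob standing in for the ODEProblem being fit):

cost = L2Loss2(t, data)                       # or L2Loss2(t, data; data_weight = w)
obj  = build_loss_objective(prob, Tsit5(), cost)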
First time I get to use a struct!
Thanks,
A
Hi Chris -
I've been trying to parallelize the process above. It works pretty well when I script it, but it does not when I write it as a function; it seems I cannot distribute function arguments to run internal functions in parallel. This is the simplest example I can think of (I've been trying to use your ParallelDataTransfer.jl package):
What I would like to do is make function arguments available to all processes within a function -
i.e.
function f(a)
    @everywhere a = $a
    @everywhere println(a)
end
but this does not work. Below are two examples; I'd appreciate any light you can shed. I've been looking at this for a couple of days now, and I am pretty baffled.
@everywhere function test(ag)
    sendto(workers(), ag = ag)
    @everywhere println(ag)
end
test(5)
This is the trace:
5
On worker 2:
UndefVarError: ag not defined
top-level scope at none:0
eval at ./boot.jl:319
#116 at /opt/julia-1.0.0/share/julia/stdlib/v1.0/Distributed/src/process_messages.jl:276
run_work_thunk at /opt/julia-1.0.0/share/julia/stdlib/v1.0/Distributed/src/process_messages.jl:56
run_work_thunk at /opt/julia-1.0.0/share/julia/stdlib/v1.0/Distributed/src/process_messages.jl:65
#102 at ./task.jl:259
#remotecall_wait#154(::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::Function, ::Function, ::Distributed.Worker, ::Module, ::Vararg{Any,N} where N) at /opt/julia-1.0.0/share/julia/stdlib/v1.0/Distributed/src/remotecall.jl:407
remotecall_wait(::Function, ::Distributed.Worker, ::Module, ::Vararg{Any,N} where N) at /opt/julia-1.0.0/share/julia/stdlib/v1.0/Distributed/src/remotecall.jl:398
#remotecall_wait#157(::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::Function, ::Function, ::Int64, ::Module, ::Vararg{Any,N} where N) at /opt/julia-1.0.0/share/julia/stdlib/v1.0/Distributed/src/remotecall.jl:419
remotecall_wait(::Function, ::Int64, ::Module, ::Vararg{Any,N} where N) at /opt/julia-1.0.0/share/julia/stdlib/v1.0/Distributed/src/remotecall.jl:419
(::getfield(Distributed, Symbol("##163#165")){Module,Expr})() at ./task.jl:259
…and 16 more exception(s).
Stacktrace:
[1] sync_end(::Array{Any,1}) at ./task.jl:226
[2] remotecall_eval(::Module, ::Array{Int64,1}, ::Expr) at /opt/julia-1.0.0/share/julia/stdlib/v1.0/Distributed/src/macros.jl:207
[3] macro expansion at /opt/julia-1.0.0/share/julia/stdlib/v1.0/Distributed/src/macros.jl:190 [inlined]
[4] test(::Int64) at ./In[61]:4
[5] top-level scope at In[61]:7
When I modify the function:
@everywhere function test2(ag3)
    println(ag3)
    @everywhere local ag4 = ag3
    println(ag4)
    @everywhere println(ag4)
end
test2(5)
same issue:
On worker 2:
UndefVarError: ag3 not defined
…(the rest of the stack trace is essentially identical to the one above)
Do you have any idea how to fix this issue? I want to make my function arguments available to every process, but I can't seem to make the processes see them.
Thanks
I feel like I'm wading into a private conversation here, but anyway: your code snippets don't amount to an MWE, so it's hard to say what's going wrong. This works:
julia> using Distributed
julia> addprocs(2)
2-element Array{Int64,1}:
2
3
julia> function printeverywhere(a)
           @everywhere a = $a
           @everywhere println(a)
       end
printeverywhere (generic function with 1 method)
julia> printeverywhere(5)
5
From worker 2: 5
From worker 3: 5
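(The $a interpolation is what makes this work: @everywhere splices in the caller's local value before sending the expression to the workers, so a is defined on each of them.)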
Thanks so much -
not private at all. It's just that Chris has been extremely helpful with DAE and fitting advice.
A
By the way, what does MWE stand for? English is not my first language, so I still have trouble with acronyms.
Thanks again
Apologies, I was actually going to link that acronym to a post which explains it in a bit more detail. MWE stands for Minimum Working Example: basically a code snippet that contains as little code as possible while still allowing others to copy/paste and run it on their local machine to replicate your issue.
This helps in three ways:
- Often your problem disappears while you attempt to construct the MWE, as stripping away the unnecessary parts of your code forces you to isolate the issue more clearly yourself;
- If others fail to replicate your issue, it can then point to problems with your installation/Julia version/hardware etc.; and
- Others can play around with different possible solutions much more easily if they can actually run your code.
I see -
My issue is that I couldn't move past this stage, so just getting the variable to be seen by all processes, as in your code, was (I hope) the solution to my issues!
@everywhere function fit_to_data(model, IC, tspan, t, data, lower, upper, varnum, algnum = 0; kwargs...)
    @everywhere model = $model
    @everywhere IC = $IC
    @everywhere tspan = $tspan
    @everywhere t = $t
    @everywhere data = $data
    @everywhere println(data)
    @everywhere lower = $lower
    @everywhere upper = $upper
    @everywhere varnum = $varnum
    @everywhere algnum = $algnum
end
UndefVarError: data not defined
Stacktrace:
[1] top-level scope at /opt/julia-1.0.0/share/julia/stdlib/v1.0/Distributed/src/macros.jl:189
[2] top-level scope at In[129]:1
Once I remove @everywhere from in front of the function name, following your example, my variables are seen and I can use my function, so you did solve my issue… Not sure why it works like this, but I have not done much parallel programming before, so I am not always certain how to make information available to the correct processes.
Thanks again.
A
So, continuing the conversation from before:
I still can't figure out how to solve my issue. I can parallelize in script form, but not in function form.
For example:
using Distributed
@everywhere using DifferentialEquations, DiffEqParamEstim, BlackBoxOptim

@everywhere function g(du, u, p, t)
    σ, ρ, β = p
    x, y, z = u
    du[1] = dx = σ * (y - x)
    du[2] = dy = x * (ρ - z) - y
    du[3] = dz = x * y - β * z
end
@everywhere l2 = Float64.([0, 0, 0])
@everywhere u2 = Float64.([20, 30, 10])
@everywhere u0 = [1.0; 0.0; 0.0]
@everywhere t = 0.0:0.05:1.0
@everywhere tspan = (0.0, 1.0)
@everywhere model_ode(p_) = ODEProblem(g, u0, tspan, p_)
@everywhere solve_model(mp_, s, i) = DifferentialEquations.solve(model_ode(mp_), Tsit5(), saveat = s, save_idxs = i)
@everywhere mock_data = Array(solve_model([10.0, 28.0, 8/3], 0.05, [1, 2, 3]))
@everywhere function L(t, dat, w = nothing)
    return L2Loss(t, dat; data_weight = w)
end
@everywhere loss_objective(mp_, dat) = build_loss_objective(model_ode(mp_), Tsit5(), L(t, dat))
@everywhere bbobj(args) = loss_objective(args, mock_data)(args)
opts = bbsetup(bbobj; Method = :separable_nes, SearchRange = collect(zip(l2, u2)),
               NumDimensions = 3, MaxFuncEvals = 50000, workers = workers())
bb3 = bboptimize(opts);
will run the optimization in parallel.
My issue happens when I want to wrap this script in a function and pass the model, data, etc. as parameters.
As you can see, every variable is defined everywhere in the script.
if I do
#using mock_data from above
@everywhere function f(model, data)
    @everywhere l2 = Float64.([0, 0, 0])
    @everywhere u2 = Float64.([20, 30, 10])
    @everywhere u0 = [1.0; 0.0; 0.0]
    @everywhere t = 0.0:0.05:1.0
    @everywhere tspan = (0.0, 1.0)
    @everywhere model_ode(p_) = ODEProblem(model, u0, tspan, p_)
    @everywhere solve_model(mp_, s, i) = DifferentialEquations.solve(model_ode(mp_), Tsit5(), saveat = s, save_idxs = i)
    @everywhere L(t, dat, w = nothing) = L2Loss(t, dat; data_weight = w)
    @everywhere loss_objective(mp_, dat) = build_loss_objective(model_ode(mp_), Tsit5(), L(t, dat))
    @everywhere bbobj(args) = loss_objective(args, data)(args)
    opts = bbsetup(bbobj; Method = :separable_nes, SearchRange = collect(zip(l2, u2)),
                   NumDimensions = 3, MaxFuncEvals = 50000, workers = workers())
    bb3 = bboptimize(opts);
end
f(g, mock_data)
I get errors where model and data are not recognized, and if I skip the @everywhere in front of f it runs in only one process.
My goal is to have the optimization run in parallel for any model and dataset I pass in, but I'm not sure how to do that.
Thanks,
A
I think there is a misunderstanding about parallelizing via multiprocessing. First of all, if everything is in a function, one @everywhere at the beginning is enough. One way to parallelize the computation is to compute gradient values of your model (g) in different processes by spawning. Another is to process more than one model&data pair by spawning f with different arguments in each process. Either way, @everywhere is not magic that does it for you; here, it only defines methods or variables. I suggest reading the related references in the documentation.
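For example, a minimal sketch of the second approach (fit_one and its body are placeholders; pmap ships each argument pair to whichever worker runs the call, so no @everywhere globals are needed):

using Distributed
addprocs(2)

@everywhere function fit_one(model, data)
    # placeholder: run one model/data fit here and return its result
    return sum(data)
end

# hypothetical model/data pairs; pmap distributes the calls across the workers
pairs = [(identity, [1.0, 2.0]), (identity, [3.0, 4.0])]
results = pmap(p -> fit_one(p...), pairs)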
Thanks,
I'll double-check.
It's been working well for other functions, which is where the confusion arose (I check with top how many processes are running), and for simpler functions I get good distribution across processes. I'll look into how to improve it.
Thanks again.
A