Dual numbers are not recognized by OpenModelica when using a user-defined function with autodiff in JuMP

Hi,
I am trying to solve a simple set-point tracking MPC using JuMP.
The model I am working with is

\frac{dx}{dt} = a + b\,\sin(x).

There are no issues in JuMP when the model is discretized directly in Julia. However, when the model is evaluated by an external simulator (I am using Modelica models from OpenModelica through the OMJulia API), the dual number cannot be passed as a normal floating-point number to the external simulator.
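For reference, the discretization used in func_ below is a single explicit-Euler step of size dt:

x_{k+1} = x_k + dt\,\bigl(a_k + b_k\sin(x_k)\bigr).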

There are two blocks of code.

1. Using discretization in Julia (no problem; see the optimal tracking figure below)

using JuMP, Ipopt, Plots

# reference signal
rN=zeros(100);
rN[1:30].=2.;rN[31:60].=3.;rN[61:80].=4.;rN[81:100].=2.
plot(rN)

function run_mpc(x0,iterator) 
    model = Model(with_optimizer(Ipopt.Optimizer, print_level=0))
    Np = 3;p=1;
    function func_(a_,b_,x_)
        return x_+dt*(a_+b_*sin(x_))
    end
    @variables model begin
        # state
        x[1:Np]
        # inputs
        a[1:Np]
        b[1:Np] 
    end
    JuMP.register(model, :func_, 3, func_, autodiff=true)
    
    @NLconstraint(model, [k=1:Np-1],x[k+1]==func_(a[k],b[k],x[k]))
    @NLobjective(model, Min, 
       0.5 * sum((x[k]-rN[k+iterator])^2+p*(a[k]^2+b[k]^2) for k=1:Np))

    # initial conditions
    @constraint(model,x[1] == x0);
    # constraint on changes in b
    @constraint(model, [k=1:Np-1],-10<=b[k+1]-b[k]<=10);
    optimize!(model)
    # Extract the solutions from the model
    x = JuMP.value.(x)
    a = JuMP.value.(a)
    b = JuMP.value.(b)
    return x,a,b
end

# initial conditions
x0 = 1.;
N = 80;dt = 1; 
x_opt = zeros(N+1);
x_opt[1:1] .= 1
a_opt = zeros(N+1);
a_opt[1:1] .= 1
b_opt = zeros(N+1);
b_opt[1:1] .= 1
for i in 1:N
    # run MPC
    x_,a_,b_ = run_mpc(x0,i)
    # saving for plottings
    # first optimal a and b
    a_opt[i+1:i+1] .= a_[1]
    b_opt[i+1:i+1] .= b_[1];
    # update states
    x0 = x0+dt*(a_[1]+b_[1]*sin(x0))
    x_opt[i+1:i+1] .= x0
end

plot(x_opt)
plot!(rN)

The optimal tracking is shown below.

[figure: optimal tracking of x_opt against the reference rN]

2. Using discretization from the external simulator (problem: the dual number is not recognized by the simulator)

Now, when I use func_ to run OpenModelica and set its parameters, the dual number cannot be passed in.

function func_(a_,b_,x_)
    # push the current values of the decision variables into the Modelica parameters
    omc.setParameters(["x0=$(x_)","a=$(a_)","b=$(b_)"])
    # run the compiled FirstOrder model
    omc.simulate()
    # read back the trajectory of x and return its final value
    xm = omc.getSolutions("x")
    xg = xm[1]
    return xg[end]
end
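The root cause seems to be that JuMP registers func_ with autodiff=true, so ForwardDiff evaluates it with Dual-number arguments, and the string interpolation then writes the Dual's printed form into the -override string, which the compiled model cannot parse. A minimal illustration outside JuMP (the Dual here is constructed by hand, not by JuMP):

using ForwardDiff
d = ForwardDiff.Dual(1.0, 1.0)   # value 1.0 with one partial derivative
"x0=$(d)"                        # gives "x0=Dual{Nothing}(1.0,1.0)", not a plain number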

The error is shown below:

failed process: Process(`C:/Users/bruker/AppData/Local/Temp/jl_xrbq6i/FirstOrder.exe '-
override=stopTime=1,x0=Dual{ForwardDiff.Tag{JuMP.var"#105#107"{var"#func_#914"},Float64}}
(0.0,0.0,0.0,1.0),b=Dual{ForwardDiff.Tag{JuMP.var"#105#107"{var"#func_#914"},Float64}}
(0.0,0.0,1.0,0.0),stepSize=0.1,a=Dual{ForwardDiff.Tag{JuMP.var"#105#107"
{var"#func_#914"},Float64}}(0.0,1.0,0.0,0.0),tolerance=1e-08'`, ProcessExited(3221225477)) 
[3221225477]


Stacktrace:
 [1] pipeline_error at .\process.jl:525 [inlined]
 [2] run(::Base.CmdRedirect; wait::Bool) at .\process.jl:440
 [3] run at .\process.jl:438 [inlined]
 [4] (::OMJulia.var"#22#68"{OMJulia.OMCSession})() at C:\Users\bruker\.julia\packages\OMJulia\ZLXEs\src\OMJulia.jl:442
 [5] (::var"#func_#914")(::ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#105#107"{var"#func_#914"},Float64},Float64,3}, ::ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#105#107"{var"#func_#914"},Float64},Float64,3}, ::ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#105#107"{var"#func_#914"},Float64},Float64,3}) at .\In[269]:6
 [6] #105 at C:\Users\bruker\.julia\packages\JuMP\YXK4e\src\nlp.jl:1180 [inlined]
 [7] vector_mode_dual_eval at C:\Users\bruker\.julia\packages\ForwardDiff\CrVlm\src\apiutils.jl:37 [inlined]
 [8] vector_mode_gradient!(::SubArray{Float64,1,Array{Float64,1},Tuple{UnitRange{Int64}},true}, ::JuMP.var"#105#107"{var"#func_#914"}, ::SubArray{Float64,1,Array{Float64,1},Tuple{UnitRange{Int64}},true}, ::ForwardDiff.GradientConfig{ForwardDiff.Tag{JuMP.var"#105#107"{var"#func_#914"},Float64},Float64,3,Array{ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#105#107"{var"#func_#914"},Float64},Float64,3},1}}) at C:\Users\bruker\.julia\packages\ForwardDiff\CrVlm\src\gradient.jl:103
 [9] gradient! at C:\Users\bruker\.julia\packages\ForwardDiff\CrVlm\src\gradient.jl:35 [inlined]
 [10] gradient! at C:\Users\bruker\.julia\packages\ForwardDiff\CrVlm\src\gradient.jl:33 [inlined]
 [11] (::JuMP.var"#106#108"{JuMP.var"#105#107"{var"#func_#914"},ForwardDiff.GradientConfig{ForwardDiff.Tag{JuMP.var"#105#107"{var"#func_#914"},Float64},Float64,3,Array{ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#105#107"{var"#func_#914"},Float64},Float64,3},1}}})(::SubArray{Float64,1,Array{Float64,1},Tuple{UnitRange{Int64}},true}, ::SubArray{Float64,1,Array{Float64,1},Tuple{UnitRange{Int64}},true}) at C:\Users\bruker\.julia\packages\JuMP\YXK4e\src\nlp.jl:1182
 [12] eval_objective_gradient(::JuMP._UserFunctionEvaluator, ::SubArray{Float64,1,Array{Float64,1},Tuple{UnitRange{Int64}},true}, ::SubArray{Float64,1,Array{Float64,1},Tuple{UnitRange{Int64}},true}) at C:\Users\bruker\.julia\packages\JuMP\YXK4e\src\nlp.jl:1175
 [13] forward_eval(::Array{Float64,1}, ::Array{Float64,1}, ::Array{JuMP._Derivatives.NodeData,1}, ::SparseArrays.SparseMatrixCSC{Bool,Int64}, ::Array{Float64,1}, ::Array{Float64,1}, ::Array{Float64,1}, ::Array{Float64,1}, ::Array{Float64,1}, ::Array{Float64,1}, ::JuMP._Derivatives.UserOperatorRegistry) at C:\Users\bruker\.julia\packages\JuMP\YXK4e\src\_Derivatives\forward.jl:165
 [14] _forward_eval_all(::NLPEvaluator, ::Array{Float64,1}) at C:\Users\bruker\.julia\packages\JuMP\YXK4e\src\nlp.jl:506
 [15] macro expansion at C:\Users\bruker\.julia\packages\JuMP\YXK4e\src\nlp.jl:550 [inlined]
 [16] macro expansion at .\util.jl:234 [inlined]
 [17] eval_objective_gradient(::NLPEvaluator, ::Array{Float64,1}, ::Array{Float64,1}) at C:\Users\bruker\.julia\packages\JuMP\YXK4e\src\nlp.jl:548
 [18] eval_objective_gradient(::Ipopt.Optimizer, ::Array{Float64,1}, ::Array{Float64,1}) at C:\Users\bruker\.julia\packages\Ipopt\bYzBL\src\MOI_wrapper.jl:864
 [19] (::Ipopt.var"#eval_grad_f_cb#47"{Ipopt.Optimizer})(::Array{Float64,1}, ::Array{Float64,1}) at C:\Users\bruker\.julia\packages\Ipopt\bYzBL\src\MOI_wrapper.jl:1060
 [20] eval_grad_f_wrapper(::Int32, ::Ptr{Float64}, ::Int32, ::Ptr{Float64}, ::Ptr{Nothing}) at C:\Users\bruker\.julia\packages\Ipopt\bYzBL\src\Ipopt.jl:167
 [21] solveProblem(::IpoptProblem) at C:\Users\bruker\.julia\packages\Ipopt\bYzBL\src\Ipopt.jl:361
 [22] optimize!(::Ipopt.Optimizer) at C:\Users\bruker\.julia\packages\Ipopt\bYzBL\src\MOI_wrapper.jl:1178
 [23] optimize!(::MathOptInterface.Bridges.LazyBridgeOptimizer{Ipopt.Optimizer}) at C:\Users\bruker\.julia\packages\MathOptInterface\bygN7\src\Bridges\bridge_optimizer.jl:239
 [24] optimize!(::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.AbstractOptimizer,MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}) at C:\Users\bruker\.julia\packages\MathOptInterface\bygN7\src\Utilities\cachingoptimizer.jl:189
 [25] optimize!(::Model, ::Nothing; bridge_constraints::Bool, ignore_optimize_hook::Bool, kwargs::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at C:\Users\bruker\.julia\packages\JuMP\YXK4e\src\optimizer_interface.jl:131
 [26] optimize! at C:\Users\bruker\.julia\packages\JuMP\YXK4e\src\optimizer_interface.jl:107 [inlined] (repeats 2 times)
 [27] run_mpc(::Float64, ::Int64) at .\In[269]:28
 [28] top-level scope at .\In[270]:13 [3221225477]

The Modelica model I am using is FirstOrder.mo

model FirstOrder
  parameter Real x0 = 1;
  parameter Real a = -1;
  parameter Real b = 2;
  Real x(start=x0);
equation
  der(x) = a+b*sin(x);
end FirstOrder;

This model can be loaded with OMJulia as follows:

using OMJulia
omc= OMJulia.OMCSession();
omc.ModelicaSystem(raw"Path\To\File\FirstOrder.mo", "FirstOrder");
omc.setSimulationOptions(["stopTime=1","stepSize=0.1","tolerance=1e-08"])    
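As a quick sanity check (the parameter values here are arbitrary), calling the wrapper with plain Float64 arguments works, because the interpolated override string then contains ordinary numbers; the failure only appears once JuMP/ForwardDiff starts passing Dual numbers:

# plain floats: the override string contains ordinary numbers and the simulation runs
func_(-1.0, 2.0, 1.0)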

Question: In the error shown above (failed process: …), how can I convert a_, b_ and x_ so that they can be assigned to a, b and x0 in the Modelica model?

You cannot automatically differentiate through an external program. You will have to provide explicit gradients.


Thanks Oscar. Is there any example of user-defined functions in nonlinear JuMP where we can provide an explicit gradient? Can you show me a small routine for that?

See https://jump.dev/JuMP.jl/stable/nlp/#User-defined-Functions-1

Thanks, Oscar! I studied it, but it uses autodiff = true (which I guess means automatic differentiation with dual numbers; I am quite bad with these numerical methods). I just wanted to write a function so that it won't use a dual number, and I did not quite understand whether that is possible. See https://jump.dev/JuMP.jl/stable/nlp/#User-defined-Functions-1. Does that mean we can define our own function and its gradient? Optim.jl and LsqFit.jl have actually optimized parameters of the external simulator for me; I wanted to build such a routine here.

See further down for the f_prime example. Here is a clearer example:

using JuMP, Ipopt
f(x, y) = (x - 1)^2 + y^2
function ∇f(g, x, y)
    g[1] = 2 * (x - 1)
    g[2] = 2 * y
    return
end
model = Model(Ipopt.Optimizer)
@variable(model, x >= 0)
@variable(model, y >= 1)
register(model, :f, 2, f, ∇f)
@NLobjective(model, Min, f(x, y))
optimize!(model)
objective_value(model)
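
Applied to the simulator case above, one option is to register func_ with an explicit gradient that never evaluates the wrapper on Dual numbers. A minimal sketch (untested; the central-difference scheme and the step size h are my own assumptions and may need tuning against the simulator tolerance), to be used inside run_mpc in place of the autodiff registration:

# hypothetical finite-difference gradient for the simulator wrapper func_
function ∇func_(g, a_, b_, x_)
    h = 1e-6
    g[1] = (func_(a_ + h, b_, x_) - func_(a_ - h, b_, x_)) / (2h)
    g[2] = (func_(a_, b_ + h, x_) - func_(a_, b_ - h, x_)) / (2h)
    g[3] = (func_(a_, b_, x_ + h) - func_(a_, b_, x_ - h)) / (2h)
    return
end

# no autodiff: JuMP now calls func_ and ∇func_ only with Float64 arguments
register(model, :func_, 3, func_, ∇func_)

Note that each gradient evaluation then costs six simulator runs, so this is only practical for short horizons like Np = 3.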

Wow, I overlooked this example. Thanks, what a godsend!
