Problems with Dual type using JuMP, NLopt and DifferentialEquations

Hello Julia community. I am using Julia v0.6.4 and trying to understand how optimization works in Julia with JuMP.jl and NLopt.jl, using DifferentialEquations.jl to define the objective function. I have gotten stuck on the Dual type, which is presumably used to compute the gradient. Here is an example:

using JuMP
using NLopt
using DifferentialEquations

function f_tetra!(du,u,p,t)
    # linear two-compartment model: u[1] decays into u[2] at rate p[1],
    # and u[2] is eliminated at rate p[2]
    du[1] = -p[1]*u[1]
    du[2] = p[1]*u[1] - p[2]*u[2]
end

function run_tetra(theta1,theta2,theta3)
    println(theta1,theta2,theta3)
    tdata = [1, 2, 3, 4, 6, 8, 10, 12, 16]
    ydata = [0.7, 1.2, 1.4, 1.4, 1.1, 0.8, 0.6, 0.5, 0.0]
    prob = ODEProblem(f_tetra!,[theta3,0.0],(0.0,tdata[end]),(theta1,theta2))
    sol = DifferentialEquations.solve(prob, saveat=tdata, save_start=false)
    return sum((ydata - convert(Array,sol)[2,:]).^2)  # sum of squared residuals
end

modd = Model(solver=NLoptSolver(algorithm=:LD_MMA))
JuMP.register(modd, :run_tetra, 3, run_tetra, autodiff=true)

@variable(modd, x1 >= 1e-7)
@variable(modd, x2 >= 1e-7)
@variable(modd, x3 >= 1e-7)

@NLobjective(modd, Min, run_tetra(x1, x2, x3))
@constraint(modd, x1-x2 >= 1e-7)

println(modd)

setvalue(x1, 1.0)
setvalue(x2, 1.0)
setvalue(x3, 1.0)

status = JuMP.solve(modd)

println("got ", getobjectivevalue(modd), " at ", [getvalue(x1),getvalue(x2),getvalue(x3)])

Here I get an error involving the Dual type.

Could someone please help me and show me what I am doing wrong?

This is a v0.6 problem which has since been corrected in v1.0, so I would recommend upgrading when you can. The issue is that, on v0.6, time had to be specified as a dual number for it to propagate correctly through the time stepping adaptivity. Thus you'd want:

prob = ODEProblem(f_tetra!,[theta3,0.0],(0.0,eltype(theta3)(tdata[end])),(theta1,theta2))

so that when theta3 is a Dual number, time is upconverted to Duals as well. Again, on v1.0 this isn't necessary.
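For intuition, here is a minimal sketch of that promotion, assuming ForwardDiff.jl (which is what JuMP's autodiff uses for registered functions); the value and partial below are just placeholders:

using ForwardDiff

d = ForwardDiff.Dual(1.5, 1.0)  # a Dual scalar, like theta3 during autodiff
T = eltype(d)                   # for a scalar, eltype is just its type, here a Dual type
tend = T(16.0)                  # 16.0 promoted to a Dual with zero partials

So eltype(theta3)(tdata[end]) converts the end time to theta3's type: Float64 for a plain evaluation, Dual during differentiation.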

And FWIW I just ran your code on v1.0 and it worked without changing anything.

got 1.400982598809288e-14 at [1.31674, 0.126062, 1.71748]

which, given how low the least-squares value is, looks like the parameter set you used to generate the data. Given the differences I know about between the two versions, the change I proposed should make it work on v0.6, though I only tested on v1.0.
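If you want to sanity-check the Dual propagation without JuMP in the loop, a quick sketch (again assuming ForwardDiff.jl) is to differentiate the objective directly:

using ForwardDiff

# Gradient of the objective at the starting point; this only succeeds
# if Duals make it through the ODE solve and the loss computation.
g = ForwardDiff.gradient(v -> run_tetra(v[1], v[2], v[3]), [1.0, 1.0, 1.0])

If that call runs without a MethodError, the registered function should work under JuMP's autodiff too.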

Thanks Chris, now it is clearer.