# ForwardDiff and Floats

Hello,

I am trying to use Ipopt in JuMP to solve a minimization problem with automatic differentiation.

I am aware that I need to specify the types of the inputs as `Real` rather than `Float64` but it’s not clear to me to what extent I must do so because I still get this error message:

```
MethodError: no method matching InvF(::Dict{String,Any}, ::Array{ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#110#112"{typeof(objfunc)},Float64},Float64,3},3}, ::Array{ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#110#112"{typeof(objfunc)},Float64},Float64,3},2}, ::Array{ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#110#112"{typeof(objfunc)},Float64},Float64,3},3}, ::Array{ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#110#112"{typeof(objfunc)},Float64},Float64,3},2}, ::Array{ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#110#112"{typeof(objfunc)},Float64},Float64,3},3}, ::Array{ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#110#112"{typeof(objfunc)},Float64},Float64,3},1})
```

I hope the lines below suffice; if not, I am happy to provide more details.

The minimization problem:

```julia
function objfunc(x1::Real, x2::Real, x3::Real)
return sum(SSresid(mod_pars,x1,x2,x3).^2);
end
model = Model(with_optimizer(Ipopt.Optimizer, max_iter=100, print_level=0));
#adding variables
@variable(model, 0.8 <= x1 <= 1.2);
@variable(model, 95 <= x2 <= 100);
@variable(model, 1e-6 <= x3 <= 0.01);
#register function
register(model, :objfunc, 3, objfunc, autodiff=true);
@NLobjective(model, Min, objfunc(x1,x2,x3));
@time xsol_jump = JuMP.optimize!(model);
```

The functions called and the types specified:

```julia
function SSresid(mp::Dict, L::Real, K::Real, τE::Real)
#Step 1: solve HH problem
CUpol, CEpol, Us, Es, AUpol, AEpol = HHblock(mp,L,K,τE,false);
println(typeof(CUpol))
#Step 2: stationary distribution
DistE, DistU = InvF(mp,CEpol,CUpol,AEpol,AUpol,Es,Us);
#Step 3: clearing markets
res = MCblock(mp,DistE,DistU,CEpol,CUpol,L,K,τE);
return abs.(res);
end

function InvF(mp::Dict,CEpol::Array{Real}, CUpol::Array{Real},
AEpol::Array{Real}, AUpol::Array{Real}, Es::Array{Real},
Us::Array{Real}, tol=1e-6,maxiter=10000)
#initializing distributions (to have everyone with no assets)
DistE0 = zeros(Real,N,M,T); DistE0[1,1,1] = 0.92;
DistU0 = zeros(Real,N); DistU0[1] = 1-0.92;
DistE1 = copy(DistE0); DistU1 = copy(DistU0);
#iterating until convergence
itnum = 0; dist = Inf;
while dist>tol && itnum<maxiter
DistE1, DistU1 = updateDist(mp,DistE0, DistU0, CEpol, CUpol, AEpol, AUpol, Es, Us);
dist = max(maximum(abs.(DistE1.-DistE0)),maximum(abs.(DistU1.-DistU0)));
DistE0 = copy(DistE1);
DistU0 = copy(DistU1);
itnum +=1;
end
if itnum==maxiter
@warn "Reached maximum number of iterations, iteration did not converge"
end
return DistE1, DistU1;
end
```

You can get rid of all of your type annotations; you don’t need them. Additionally, you will want to rewrite `DistE0` and `DistU0`: as written, their entries are not concrete (check `isconcretetype(Real)` vs. `isconcretetype(Float64)`).
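A quick sketch of the concreteness point, which you can paste into the REPL:

```julia
# Real is abstract, so an Array{Real} stores boxed references;
# Float64 is concrete, so an Array{Float64} stores values inline.
println(isconcretetype(Real))      # false
println(isconcretetype(Float64))   # true

# zeros(Real, n) gives an abstractly-typed array; zeros(eltype(x), n)
# picks up the concrete element type of whatever x is (Float64, Dual, ...).
x = rand(3)
a = zeros(Real, 3)
b = zeros(eltype(x), 3)
println(eltype(a))   # Real
println(eltype(b))   # Float64
```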

Also, I’m not sure what `N`, `M`, and `T` are; possibly they are global variables? You should either pass them to `InvF` or calculate them from the matrices you already provide.

There are a few ways to do this. I’ve shown one way below:

```julia
function InvF(mp, CEpol, CUpol, AEpol, AUpol, Es, Us, tol=1e-6, maxiter=10000)
#initializing distributions (to have everyone with no assets)
DistE0 = zeros(eltype(CEpol),N,M,T); DistE0[1,1,1] = 0.92;
DistU0 = zeros(eltype(CEpol),N); DistU0[1] = 1-0.92;
```

It will also run faster if you use `copyto!(DistE0, DistE1)` instead of `copy`. There are other performance gains you can eke out if you look through the performance tips: https://docs.julialang.org/en/v1/manual/performance-tips/
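A minimal sketch of the `copyto!` point:

```julia
# copy allocates a fresh array on every iteration;
# copyto! reuses DistE0's existing memory instead.
DistE0 = zeros(4, 3, 2)
DistE1 = rand(4, 3, 2)
copyto!(DistE0, DistE1)      # in-place: no new allocation
println(DistE0 == DistE1)    # true
```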


The underlying issue is this:

```julia
julia> x = rand(3)
3-element Array{Float64,1}:
0.6017620020347454
0.09429667862972102
0.39870937669167517

julia> typeof(x) <: Array{Real}
false

julia> typeof(x) <: Array{<:Real}
true
```

This prevents `InvF` from being called with the arguments you expect. You want `CEpol::Array{<:Real}` instead of `CEpol::Array{Real}`.
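A small sketch of the dispatch difference (function names are mine, for illustration):

```julia
f_invariant(x::Array{Real})   = sum(x)   # only matches an Array{Real} exactly
f_covariant(x::Array{<:Real}) = sum(x)   # matches Array{Float64}, Array{Dual}, ...

x = rand(3)                            # Array{Float64,1}
println(applicable(f_invariant, x))    # false -- this is your MethodError
println(applicable(f_covariant, x))    # true
```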

Here is the relevant section of the documentation: https://docs.julialang.org/en/v1/manual/types/#Parametric-Composite-Types


Can this issue be fixed by removing all type annotations?

But what if a particular array really needs to contain integers because it is an array of indices that I use to extract elements of some other matrix?

Nothing is stopping it from being integers. It’s just not necessary to specify it. If you feel the need to specify integers for whatever reason, I guess you can, but then that array should not be one of the optimization variables.
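To illustrate (with made-up names, not your model): an index array computed from fixed data, rather than from the optimization variables, can stay a `Vector{Int}` and is safe to use for indexing, whatever the element type of the array it indexes into.

```julia
# A fixed grid, independent of the optimization variables:
grid = range(0.0, 1.0, length=10)

# Integer indices derived from the fixed grid -- a Vector{Int}:
xqi = [searchsortedlast(grid, q) for q in (0.25, 0.5, 0.75)]

# Indexing a (possibly Dual-valued) array with Int indices needs no conversion:
vals = rand(10)
picked = vals[xqi]
println(eltype(xqi))   # Int
</imports-placeholder>
```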

But when I try, I still get the error. This is the line where the error appears:

```julia
println("Type *** is ", typeof(convert(Array{Int64,1},xqi)))
```

This is the error:

```
MethodError: no method matching Int64(::ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#110#112"{typeof(objfunc)},Float64},Float64,3})
```

So I can’t really convert. Also, `xqi` is not the optimization variable.

When you try what?

Sorry, ignore the previous message. I screwed up.

Though I resolved some intermediate issues, I am still having some problems. The code is as follows:

```julia
using JuMP, Ipopt;
function objfunc(x1,x2,x3)
return sum(SSresid(mod_pars,x1,x2,x3).^2);
end
model = Model(with_optimizer(Ipopt.Optimizer, max_iter=100, print_level=0));
#adding variables
@variable(model, 0.8 <= x1 <= 1.2);
@variable(model, 95 <= x2 <= 100);
@variable(model, 1e-6 <= x3 <= 0.01);
#register function
register(model, :objfunc, 3, objfunc, autodiff=true);
@NLobjective(model, Min, objfunc(x1,x2,x3));
@time xsol_jump = JuMP.optimize!(model);

function SSresid(mp,L,K,τE)
println("DANIELE is HERE")
#Step 1: solve HH problem
println("          caca: ----> ", typeof(L))
CUpol, CEpol, Us, Es, AUpol, AEpol = HHblock(mp,L,K,τE,false);
#Step 2: stationary distribution
DistE, DistU = InvF(mp,CEpol,CUpol,AEpol,AUpol,Es,Us);
#Step 3: clearing markets
res = MCblock(mp,DistE,DistU,CEpol,CUpol,L,K,τE);
return abs.(res);
end
```

From what I can tell, the first time `SSresid` is run there are no problems and in particular, the type is `Float64`, the second time it’s `ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#110#112"{typeof(objfunc)},Float64},Float64,3}` which then gives me an error when I run the `InvF` function (again, on the second call of `SSresid`).

Thoughts? Thank you!

Please provide a simplified reproducible example. I can’t run your code because I don’t know what things like `mod_pars` or `HHblock` are.

Your code needs to work regardless of the types of `x1`, `x2`, and `x3`. All you can assume is that they are subtypes of `Real`. The `no method matching Int64` error suggests you are trying to convert one of them to an integer. You can’t do this. A full (simplified) example will help people offer more insight.
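As a toy sketch of what "works for any `Real`" means (a toy objective of my own, not your model): with no type annotations, `ForwardDiff.Dual` flows through the computation unchanged.

```julia
using ForwardDiff

# No type annotations: this accepts Float64 and ForwardDiff.Dual alike.
f(x1, x2, x3) = (x1 - 1)^2 + (x2 - 2)^2 + x3^2

# ForwardDiff calls f with Dual numbers internally to get the gradient:
g = ForwardDiff.gradient(v -> f(v[1], v[2], v[3]), [1.5, 2.5, 0.5])
println(g)   # [1.0, 1.0, 1.0]
```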

Thank you. Let me try this last question, if that fails I will try to put together a simple example (sorry the code I have is quite convoluted).

First: will all variables that I derive from my optimization variables, in one way or another, inherit their type, in my case `ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#110#112"{typeof(objfunc)},Float64},Float64,3}`?

Second: is there a way to convert such secondary variables to, say, `Float64` within the code? A simple `convert` will not work.

Thank you!

@danicaratelli Have you found any solution to this problem? If yes, would you please share the solution for greater community learning? FYI, I’m having similar problems and have been duckduckgoing for days.


Hello. No, I ended up using a different solver with numerical differentiation; I now use NLsolve.