Optimize not working with function that has find_zero

I have been using Julia on and off for over a year now. I know the basics but am not an expert by any means. I have run into this problem many times in the past: something with ForwardDiff not working because of floating-point numbers. In this case I am using ‘optimize’ to minimize a function. I have had the same issue before with NLsolve and other routines that use ForwardDiff to calculate derivatives. I am pasting my code below so someone can help me fix it.

using Parameters,NLsolve,Optim,Roots;
@with_kw struct Params
β = 0.96; #discount factor
α = 0.33; #capital share
ψ = 0.05; #default penalty
δ = 0.1; #depreciation rate
ϕ = 0.5; #inverse frisch elasticity
χ = 1.7; #coefficient for disutility of labor
λ = 0.5; #fraction of firms’ capital seized on default
λstar = 0.4; #fraction of firm capital recovered by creditors
ηmin = 0.9; #min idiosyncratic shock
ηmax = 1.1; #max idiosyncratic shock
zGdef = 0.97; #productivity cost of default for govt
hdef = 0.97; #decrease in firm prod after default
G = 0.08; #constant govt expenditure
dbar = 0.02; #long run dividend payout target
ζ = 1; #friction in paying out dividend
end
@unpack β,α,ψ,δ,ϕ,χ,λ,λstar,ηmin,ηmax,zGdef,hdef,G,dbar,ζ = Params();

function ηbar(Params,lev,z,K,L::Real)
@unpack α,λ,δ,hdef = Params();
f(ηbar,L,K) = ((lev-ηbar*λ*(1-δ))/(ηbar*α*z*(1-(1-λ)*hdef^(1/α))))^(1/(1-α))*
((ηmax-ηbar)/(ηmax-ηmin)*(ηbar+ηmax)/2 + (1-λ)*(ηbar-ηmin)/(ηmax-ηmin)*(ηbar+ηmin)/2*hdef^(1/α)) - L/K;
g(ηbar) = f(ηbar,L,K);
return find_zero(g,.95);
end

function wage(z,K,lev,L,L_high,L_low,Params)
# z controls for govt default decision
@unpack α,hdef = Params();
if L>=L_high
w = (1-α)*z*(K/L)^α;
elseif L>L_low
η = ηbar(Params,lev,z,K,L);
w = (1-α)*z*((lev-η*λ*(1-δ))/(η*α*z*(1-(1-λ)*hdef^(1/α))))^(α/(α-1));
else
w = (1-α)*z*hdef*((1-λ)*K/L)^α;
end
return w;
end

z,K,lev,L_high,L_low = 0.852,1.5,.4336,.1435,0.0;
negtaxrevenue1(L) = (-wage(z,K,lev,L,L_high,L_low,Params) + χ*L^ϕ)*L; #negative of tax revenue
result = optimize(x->negtaxrevenue1(first(x)), [0.35],Newton(); autodiff = :forward);

Please read “Please read: make it easier to help you”, particularly the part about formatting code so that it is readable.

In addition, say what the error you encountered was, and the behavior that you expected.

Could you please edit your post to make it a bit easier to read (and therefore easier to help you)? If you format your code with backticks then it will render properly, and if you can add the actual error message or incorrect result you’re getting, then we’ll be able to more easily figure out what’s wrong and how to fix it.

Edit: sorry for the double-post. I’m just echoing what @odow said.

Sorry to say it, but there’s a significant chance that you’re going to have to work around quite a few things here. Nested forward differentiation tends to run into annoying type issues that are hard to work around.
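One thing you could try (just a sketch, untested against your exact setup, and ηbar_generic is only a placeholder name): make the starting guess for find_zero follow the number type of L, so that ForwardDiff’s Dual numbers can propagate through the root-finding iteration instead of being force-converted to Float64 inside the solver’s state.

# Sketch of a type-generic ηbar. The only substantive change is the starting
# guess `0.95 * one(L)`, which has the same number type as L (Float64 when
# called normally, a ForwardDiff.Dual when called from the gradient/Hessian)
# instead of a hard-coded Float64.
function ηbar_generic(Params, lev, z, K, L::Real)
    @unpack α, λ, δ, hdef, ηmin, ηmax = Params()
    f(ηb) = ((lev - ηb*λ*(1-δ)) / (ηb*α*z*(1 - (1-λ)*hdef^(1/α))))^(1/(1-α)) *
        ((ηmax - ηb)/(ηmax - ηmin)*(ηb + ηmax)/2 +
         (1-λ)*(ηb - ηmin)/(ηmax - ηmin)*(ηb + ηmin)/2*hdef^(1/α)) - L/K
    return find_zero(f, 0.95 * one(L))
end

Differentiating straight through the solver iterations like this only gives a derivative that is as accurate as the root solve itself, and it can still break with some Roots methods, but it is worth a try before giving up on :forward.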

What’s the exact error you get?

The whole code works fine except for the last line. When I try to minimize the function ‘negtaxrevenue1’, I get the error message I posted below. Can someone tell me how to work around this issue?

My apologies. I didn’t know how to get the code to look like it does in Juno. Here it is:

using Parameters,NLsolve,Optim,Roots;
@with_kw struct Params
    β = 0.96; #discount factor
    α = 0.33; #capital share
    ψ = 0.05; #default penalty
    δ = 0.1; #depreciation rate
    ϕ = 0.5; #inverse frisch elasticity
    χ = 1.7; #coefficient for disutility of labor
    λ = 0.5; #fraction of firms' capital seized on default
    λstar = 0.4; #fraction of firm capital recovered by creditors
    ηmin = 0.9; #min idiosyncratic shock
    ηmax = 1.1; #max idiosyncratic shock
    zGdef = 0.97; #productivity cost of default for govt
    hdef = 0.97; #decrease in firm prod after default
    G = 0.08; #constant govt expenditure
    dbar = 0.02; #long run dividend payout target
    ζ = 1; #friction in paying out dividend
end
@unpack β,α,ψ,δ,ϕ,χ,λ,λstar,ηmin,ηmax,zGdef,hdef,G,dbar,ζ = Params();

function ηbar(Params,lev,z,K,L::Real)
    @unpack α,λ,δ,hdef = Params();
    f(ηbar,L,K) = ((lev-ηbar*λ*(1-δ))/(ηbar*α*z*(1-(1-λ)*hdef^(1/α))))^(1/(1-α))*
    ((ηmax-ηbar)/(ηmax-ηmin)*(ηbar+ηmax)/2 + (1-λ)*(ηbar-ηmin)/(ηmax-ηmin)*(ηbar+ηmin)/2*hdef^(1/α)) - L/K;
    g(ηbar) = f(ηbar,L,K);
    return find_zero(g,.95);
end

function wage(z,K,lev,L,L_high,L_low,Params)
    # z controls for govt default decision
    @unpack α,hdef = Params();
    if L>=L_high
        w = (1-α)*z*(K/L)^α;
    elseif L>L_low
        η = ηbar(Params,lev,z,K,L);
        w = (1-α)*z*((lev-η*λ*(1-δ))/(η*α*z*(1-(1-λ)*hdef^(1/α))))^(α/(α-1));
    else
        w = (1-α)*z*hdef*((1-λ)*K/L)^α;
    end
    return w;
end

z,K,lev,L_high,L_low = 0.852,1.5,.4336,.1435,0.0;
negtaxrevenue1(L) = (-wage(z,K,lev,L,L_high,L_low,Params) + χ*L^ϕ)*L; #negative of tax revenue
result = optimize(x->negtaxrevenue1(first(x)), [0.35],Newton(); autodiff = :forward);

The error message I am getting is the following:

MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{getfield(Main, Symbol("##34#35")),Float64},Float64,1})
Closest candidates are:
  Float64(::Real, !Matched::RoundingMode) where T<:AbstractFloat at rounding.jl:194
  Float64(::T<:Number) where T<:Number at boot.jl:741
  Float64(!Matched::Int8) at float.jl:60
  ...
in top-level scope at base\none
in  at base\none
in  at base\none
in #optimize#87 at Optim\Agd3B\src\multivariate\optimize\optimize.jl:33 
in optimize at Optim\Agd3B\src\multivariate\optimize\optimize.jl:57
in update_state! at Optim\Agd3B\src\multivariate\solvers\second_order\newton.jl:79
in perform_linesearch! at Optim\Agd3B\src\utilities\perform_linesearch.jl:40
in HagerZhang at LineSearches\WrsMD\src\hagerzhang.jl:101 
in  at LineSearches\WrsMD\src\hagerzhang.jl:140
in  at NLSolversBase\KG9Ie\src\interface.jl:69
in value_gradient!! at NLSolversBase\KG9Ie\src\interface.jl:82
in  at NLSolversBase\KG9Ie\src\objective_types\twicedifferentiable.jl:131
in gradient! at ForwardDiff\N0wMF\src\gradient.jl:33 
in gradient! at ForwardDiff\N0wMF\src\gradient.jl:35 
in vector_mode_gradient! at ForwardDiff\N0wMF\src\gradient.jl:103
in vector_mode_dual_eval at ForwardDiff\N0wMF\src\apiutils.jl:37 
in #34 at test.jl:45 
in negtaxrevenue1 at untitled:44
in wage at untitled:35
in ηbar at untitled:26
in find_zero at Roots\VcYzu\src\find_zero.jl:556 
in #find_zero#5 at Roots\VcYzu\src\find_zero.jl:556 
in find_zero at Roots\VcYzu\src\derivative_free.jl:120 
in #find_zero#34 at Roots\VcYzu\src\derivative_free.jl:123
in  at base\none
in #find_zero#4 at Roots\VcYzu\src\find_zero.jl:541
in find_zero at Roots\VcYzu\src\find_zero.jl:682
in update_state at Roots\VcYzu\src\derivative_free.jl:163
in setproperty! at base\sysimg.jl:19
in convert at base\number.jl:7

Did you test the optimization without using a derivative method?
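For example (just a sketch; NelderMead() is one of Optim’s derivative-free methods, so no automatic differentiation is involved):

result = optimize(x -> negtaxrevenue1(first(x)), [0.35], NelderMead());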

No, I did not. I am checking how to do it now. I will post the result once I run optimize without derivatives.

Try with autodiff=:finite instead of autodiff=:forward. It’s not going to be “real” AD, but it will probably work.
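That is, keeping everything else the same and only changing the autodiff keyword:

result = optimize(x -> negtaxrevenue1(first(x)), [0.35], Newton(); autodiff = :finite);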


It worked! Thank you very much. Any idea of the fastest way to implement what I have done? I have to run this code more than 10^5 times, so knowing the fastest way to do it would be very helpful.

I don’t have time right now, but I will try to have a look tomorrow or in the weekend.

It won’t help you right now, but I have a WIP version of Optim that allows for scalars/StaticArray input and that would probably help performance. However, I have other commitments at the moment, so it’s progressing quite slowly.
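In the meantime, two things that usually help with code like yours (just a sketch, not tested on your model): give the fields of the @with_kw struct concrete types so they are not stored as Any, and build the parameter instance once and pass it into ηbar/wage/negtaxrevenue1 instead of calling Params() inside them and reading χ, ϕ, etc. from untyped globals. Something along these lines (FastParams is only a placeholder name, and only some of your fields are shown):

# Concretely typed parameter struct: without the ::Float64 annotations,
# @with_kw stores the fields as Any, which makes every access type-unstable.
@with_kw struct FastParams
    β::Float64 = 0.96     # discount factor
    α::Float64 = 0.33     # capital share
    δ::Float64 = 0.1      # depreciation rate
    ϕ::Float64 = 0.5      # inverse Frisch elasticity
    χ::Float64 = 1.7      # coefficient for disutility of labor
    λ::Float64 = 0.5      # fraction of firms' capital seized on default
    ηmin::Float64 = 0.9   # min idiosyncratic shock
    ηmax::Float64 = 1.1   # max idiosyncratic shock
    hdef::Float64 = 0.97  # decrease in firm prod after default
end

# Construct once and pass the instance around; inside ηbar/wage you would
# then take `p::FastParams` as an argument and write
# `@unpack α, λ, δ, hdef, ηmin, ηmax = p` instead of calling Params() on
# every evaluation.
p = FastParams()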

Thanks a lot for your help now. If I need help with performance, I’ll ask again here later