How to resolve type ambiguity with Optim results?

I am optimizing a function inside another function, and I need to obtain and save the results of that optimization. However, the compiler does not infer the type of those results. I am seeing these instances of Any in the @code_warntype output of the function:

30 ─ %66  = invoke Optim.:(#optimize#71)(%24::Float64, 2.220446049250313e-16::Float64, 100000::Int64, %43::Bool, %49::Bool, nothing::Nothing, %50::Int64, %46::Bool, %51::typeof(optimize), %23::getfield(Main, Symbol("##11#12")){Int64,Float64}, 2.220446049250313e-16::Float64, _3::Float64, $(QuoteNode(Brent()))::Brent)::Optim.UnivariateOptimizationResults{Float64,Float64,_1,_2,Float64,Brent} where _2 where _1
││││        └───        goto #31
│││         31 ─        goto #32
││          32 ─        goto #33
│╻╷       minimum9  33 ─ %70  = (Base.getfield)(%66, :minimum)::Any
│           │           (Core.typeassert)(%70, Main.Float64)
│           │    %72  = π (%70, Float64)
│           │    %73  = Main.V2dict::Core.Compiler.Const(Dict{Tuple{Int64,Float64},Float64}(), false)
│╻        setindex!   │    %74  = (Base.tuple)(t, K)::Tuple{Int64,Float64}
││          │           invoke Base.setindex!(%73::Dict{Tuple{Int64,Float64},Float64}, %72::Float64, %74::Tuple{Int64,Float64})
│╻╷       minimizer10 │    %76  = (Base.getfield)(%66, :minimizer)::Any

How can I resolve those type ambiguities?

If you must know, this is the function I am trying to speed up:

using Optim
const V2dict = Dict{Tuple{Int, Float64}, Float64}()
const c2dict = Dict{Tuple{Int, Float64}, Float64}()
const k′2dict = Dict{Tuple{Int, Float64}, Float64}()
const conv2dict = Dict{Tuple{Int, Float64}, Bool}()

function V2(t::Int, K::Float64)
    if t >= T
        return 0.0
    else
        if haskey(V2dict, (t, K))
            return V2dict[t, K]::Float64
        else
            opt = optimize(K′ -> -(log(K - K′) + β * V2(t+1, K′)), eps(), K, iterations = 100_000)
            V2dict[t, K] = (Optim.minimum(opt))::Float64
            k′2dict[t, K] = (Optim.minimizer(opt))::Float64
            c2dict[t, K] = (K - k′2dict[t, K])::Float64
            conv2dict[t, K] = Optim.converged(opt)::Bool
            return V2dict[t, K]::Float64
        end
    end
end

I have run into this multiple times before with multivariate optimization and couldn’t figure out what was going on; it is a really peculiar type instability. I remember bringing this up with @pkofod in another issue, but I don’t think anyone followed up on it.

1 Like

Use a function barrier after optimize.
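To illustrate, here is a minimal sketch of that idea (the names and the toy objective are mine, not the poster’s code): pass the result of optimize into a separate function, so the field accesses happen in a context where the concrete type is already known.

```julia
using Optim

const Vdict = Dict{Float64, Float64}()

# Function barrier: `opt` arrives here via one dynamic dispatch,
# but inside this function its concrete type is known, so
# Optim.minimum(opt) infers as Float64.
function save_result!(dict, K, opt)
    dict[K] = Optim.minimum(opt)
    return dict[K]
end

function solve(K::Float64)
    # Brent's method by default for univariate bounded optimization
    opt = optimize(x -> (x - K)^2, 0.0, 2K)
    return save_result!(Vdict, K, opt)
end
```

Only the call into the barrier is dynamically dispatched; everything inside it compiles to type-stable code.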

1 Like

This will likely not cause any real performance issues, but it’s something I want to resolve eventually; my time budget is quite tight, though.

However, let me just comment on the economics of the matter. I see that you’re trying to solve a standard growth model, and that you’ve learnt to successively divide the problem into a sequence of two-period problems (or static problems, if you just consider the value tomorrow as “some function”). However, if you are actually trying to use dynamic programming (which you should), you’re not currently doing that.

Every time you want to solve for a capital level at t = 0, you’re going to go through all the optimization steps from t = 0 to t = T. In your problem it won’t be too extreme, because you have no stochastic elements in your state transitions, but once those appear, you’ll be branching out into an exponentially growing number of optimization problems. Many of these will be solved at capital levels that are very close to each other within the same period: a waste of resources.

You need to solve the last period first. Save the values on a grid over K and construct an interpolant (say, a gridded interpolant from Interpolations.jl with Linear() for a piecewise-linear interpolant that preserves monotonicity). Then you go back to the period before that and optimize over current profits and the discounted future value by evaluating the interpolant you just created, not by successively solving forward in time.
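A rough sketch of that backward-induction scheme, assuming log utility as in the original code; the grid, horizon, and discount factor here are illustrative choices of mine, not taken from the question:

```julia
using Optim, Interpolations

β = 0.95
T = 5                                          # short horizon for illustration
Kgrid = collect(range(0.5, 10.0, length = 50)) # grid over capital

# V[t] holds a callable approximation of the period-t value function.
V = Vector{Any}(undef, T + 1)
V[T + 1] = K′ -> 0.0                           # terminal value is zero

for t in T:-1:1
    vals = zeros(length(Kgrid))
    for (i, K) in enumerate(Kgrid)
        # Choose tomorrow's capital K′ to maximize current utility plus the
        # discounted continuation value (Optim minimizes, so negate).
        opt = optimize(K′ -> -(log(K - K′) + β * V[t + 1](K′)), eps(), K - eps())
        vals[i] = -Optim.minimum(opt)
    end
    # Piecewise-linear gridded interpolant preserves monotonicity; extrapolate
    # linearly so K′ below the grid minimum is still defined.
    V[t] = extrapolate(interpolate((Kgrid,), vals, Gridded(Linear())), Line())
end
```

Each period is solved once on the grid, and earlier periods only ever evaluate the stored interpolant, so the total work is linear in T rather than branching out.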

Let me apologize at the end for assuming that you’re not actually doing this for a specific reason that is just unclear from your question. My point is just that, while we all love to get maximum speed out of our code, some dynamic dispatch is nowhere near as important as choosing the right algorithm in the majority of cases.

2 Likes

Hey @pkofod, you got it right. I am learning to program these problems. I see what you are saying now: with uncertainty over K, I would have to solve very similar problems many times.

What you are saying, if I understand correctly, is to solve the last period for all possible values of K, create an interpolant, and then solve backwards using the interpolant instead of the actual V. Is that it? Wouldn’t this work only for a particular class of problems in which V follows some form of stationarity? As I understand it, finite-horizon DP problems are not stationary, since we have to keep track of t.

FWIW, the type instability disappears when I first define T and β as constants before running @code_warntype, like so:

const T = 45
const β = 0.95
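That fits: non-const globals in Julia are untyped, because their binding could be reassigned to a value of any type, so the closure passed to optimize captures T and β as Any until they are declared const. A tiny illustration (the names here are mine, just for demonstration):

```julia
Tglobal = 45          # non-const global: its binding may change type later,
f() = Tglobal + 1     # so @code_warntype shows this returning ::Any

const Tconst = 45     # const global: the type is fixed,
g() = Tconst + 1      # so this infers cleanly as ::Int
```

Both return the same value, but only g() is type-stable.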

You might find some of the resources in QuantEcon · QuantEcon.jl useful. I believe there are several other packages for solving financial DP problems as well (such as stochastic DP) if you would like a ready-made package.

1 Like