Eating the cake (nat res depletion model) with endogenous time horizon in InfiniteOpt

Hello, I am trying to numerically solve the dynamic problem of optimal natural resource depletion (aka “eating the cake”) with InfiniteOpt.jl.

The script below works fine, but I have to set the time horizon (i.e. the upper bound of the domain of the infinite dimension “time”) from the analytical solution that I already have.
Is there a way to leave the time horizon “free”, i.e. to have it determined endogenously by the optimisation?

[Side question] Is it possible to use an objective that is itself an integral computed with quadgk (like welfare_auto)? I am getting errors with it, so I had to compute it manually…

# "Eating the cake" model of depletion of natural resources

cd(@__DIR__)
using Pkg
Pkg.activate(".")
# Pkg.add(["QuadGK", "InfiniteOpt", "Ipopt", "Plots"]) # only once to install packages
using QuadGK, InfiniteOpt, Ipopt, Plots

# Parameters
k=100; a = 10 # Parameters of the demand function
S₀ = 1000      # Initial stock
ρ  = 0.01      # Discount rate 
Sτ = 0         # Final stocks 
test = sqrt(2*S₀*a/ρ) # this comes from the analytical resolution that I already have !! 
T  = test       # time horizon


# Demand function at each point in time
demand_price(r;k=k,a=a) = k*exp(-a*r)

# OBJECTIVE FUNCTION
# Welfare (utility) at each point in time  
welfare_auto(r;k=k,a=a) = quadgk(q -> demand_price(q,k=k,a=a), 0, r, rtol=1e-4)[1] # quadgk first output is the value of the integral, second value is an estimated error
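# welfare_manual below is the closed form of the same integral: ∫₀ʳ k*exp(-a*q) dq = k*(1 - exp(-a*r))/a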
welfare_manual(r;k=k,a=a) = - k*(exp(-a*r) - 1) / a

discount(t; ρ=ρ) = exp(-ρ*t) # discount function

# Equation of motion of the state variable
var_stocks(r) = -r

opt = Ipopt.Optimizer   # desired solver
ns = 1_000;             # number of points in the time grid

m = InfiniteModel(opt)
@infinite_parameter(m, t in [0, T], num_supports = ns)

@variable(m, S, Infinite(t)) ## state variables
@variable(m, r, Infinite(t)) ## control variables

@objective(m, Max, integral(welfare_manual(r), t, weight_func = discount))

@constraint(m, S(0) == S₀)
@constraint(m, S(test) == Sτ)
@constraint(m, r >= 0)
@constraint(m, c1, deriv(S, t) == var_stocks(r))

optimize!(m)
termination_status(m)


r_opt = value(r)
S_opt = value(S)
ts = supports(t)
opt_obj = objective_value(m) 

ix = 2:(length(ts)-1) # index for plotting
plot(ts[ix],  S_opt[ix], lab = "S: resource stock")
plot(ts[ix],  r_opt[ix], lab = "r: resource extraction")
plot(ts[ix],  demand_price.(r_opt[ix]), lab = "p: price")

# For checking...
price_manual(t;k=k,ρ=ρ,T=T) = k*exp(ρ*(t-T))
plot!(ts[ix],  price_manual.(ts[ix]), lab = "p: price (manual)")

I just realised I can simply set T to a “high enough” number…
For the integral, I tried to use a simple manual quadrature (the function quad_trap below), but the solver seems slow and it eventually diverges…

# "Eating the cake" model of depletion of natural resources

cd(@__DIR__)
using Pkg
Pkg.activate(".")
# Pkg.add(["QuadGK", "InfiniteOpt", "Ipopt", "Plots"]) # only once to install packages
using QuadGK, InfiniteOpt, Ipopt, Plots

# Parameters
k=100; a = 10 # Parameters of the demand function
S₀ = 1000      # Initial stock
ρ  = 0.01      # Discount rate 
Sτ = 0         # Final stocks 
test = sqrt(2*S₀*a/ρ) # this comes from the analytical resolution that I already have !! 
T  = 2000       # time horizon


# Demand function at each point in time
demand_price(r;k=k,a=a) = k*exp(-a*r)

# Composite trapezoidal rule: approximates ∫ₐᵇ f(x) dx with N sub-intervals
function quad_trap(f,a,b,N)
    h = (b-a)/N
    int = h * ( f(a) + f(b) ) / 2
    for k=1:N-1
        xk = (b-a) * k/N + a
        int = int + h*f(xk)
    end
    return int
end

# OBJECTIVE FUNCTION
# Welfare (utility) at each point in time  
welfare_auto(r;k=k,a=a) = quadgk(q -> demand_price(q,k=k,a=a), 0, r, rtol=1e-4)[1] # quadgk first output is the value of the integral, second value is an estimated error
welfare_auto2(r;k=k,a=a) = quad_trap(q -> demand_price(q,k=k,a=a), 0, r, 3) # trapezoidal rule with only 3 sub-intervals: a coarse approximation
welfare_manual(r;k=k,a=a) = - k*(exp(-a*r) - 1) / a

welfare_manual(0.05)
welfare_auto(0.05)
welfare_auto2(0.05)

discount(t; ρ=ρ) = exp(-ρ*t) # discount function

# Equation of motion of the state variable
var_stocks(r) = -r

opt = Ipopt.Optimizer   # desired solver
ns = 1000;             # number of points in the time grid

m = InfiniteModel(opt)
@infinite_parameter(m, t in [0, T], num_supports = ns)

@variable(m, S, Infinite(t)) ## state variables
@variable(m, r, Infinite(t)) ## control variables

@objective(m, Max, integral(welfare_manual(r), t, weight_func = discount))

@constraint(m, S(0) == S₀)
@constraint(m, S(T) == Sτ)
@constraint(m, r >= 0)
@constraint(m, c1, deriv(S, t) == var_stocks(r))

optimize!(m)
termination_status(m)


r_opt = value(r)
S_opt = value(S)
ts = supports(t)
opt_obj = objective_value(m) 

ix = 2:(length(ts)-1) # index for plotting
plot(ts[ix],  S_opt[ix], lab = "S: resource stock")
plot(ts[ix],  r_opt[ix], lab = "r: resource extraction")
plot(ts[ix],  demand_price.(r_opt[ix]), lab = "p: price")


# For checking...
price_manual(t;k=k,ρ=ρ,T=test) = min(k, k*exp(ρ*(t-T)))
plot!(ts[ix],  price_manual.(ts[ix]), lab = "p: price (manual)")

This is a question for @pulsipher.


Hi @sylvaticus. The short answer is that, while InfiniteOpt can represent optimal control problems with infinite time horizons, there is not much infrastructure behind the scenes to readily support them (see Add Transformation Support for Domains with Infinite Bounds · Issue #190 · infiniteopt/InfiniteOpt.jl · GitHub and the referenced forum discussions); contributions welcome 🙂

With regard to using the functions from QuadGK, the reason these fail is that they do not produce expressions built from the analytic operations that JuMP expects (e.g., sin, exp, +, etc.). One way around this would be to register the function (assuming it is amenable to auto-differentiation); see Expressions · InfiniteOpt.jl. In this case we could add:

@register(m, welfare_auto(r))

And then use welfare_auto when defining objectives/constraints (note that keyword arguments would not be supported however). Also, again I haven’t tested this code and I am not sure whether the methods from QuadGK are autodifferentiable.
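For example, a minimal (untested) sketch of the registered-function version of the objective, relying on the default values of the keyword arguments k and a, could look like:

@register(m, welfare_auto(r))   # register the quadrature-based utility so it can appear in expressions
@objective(m, Max, integral(welfare_auto(r), t, weight_func = discount)) # keyword arguments cannot be overridden here; the defaults are used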

Finally, with regard to having trouble finding solutions for certain formulations (especially with very long time horizons), this is not surprising. Here, providing a good initialization for the variables can make a big difference in helping Ipopt to converge.


Got it.
I switched to a script that implements Legendre-Gauss quadrature, and with it the model differentiates and solves, although a bit more slowly than when using the analytical version of the integral directly (code below).
But in general, for dynamic optimisation the variable is a function, not a scalar… how can I initialize a variable with a function (or a vector) rather than with a scalar?

“Eating the cake” model with automatic integration of the welfare function
# "Eating the cake" model of depletion of natural resources

cd(@__DIR__)
using Pkg
Pkg.activate(".")
# Pkg.add(["InfiniteOpt", "Ipopt", "Plots"]) # only once to install packages
using InfiniteOpt
using Ipopt, Plots
using LinearAlgebra # for norm and dot used below

# Parameters
k=100; a = 10 # Parameters of the demand function
S₀ = 1000      # Initial stock
ρ  = 0.01      # Discount rate 
Sτ = 0         # Final stocks 
test = sqrt(2*S₀*a/ρ) # this comes from the analytical resolution that I already have !! 
T  = 2000       # time horizon

# Demand function at each point in time
demand_price(r;k=k,a=a) = k*exp(-a*r)

# Utility function to compute the welfare
"""
    lgquadrature_weights(n_points,lb,ub)

Returns the "weights" needed to compute the integral value of a function between lb and up   
"""
function lgquadrature_weights(N0::Integer,a::Real=-1,b::Real=1)
    # This script is for computing definite integrals using Legendre-Gauss 
    # Quadrature. Computes the Legendre-Gauss nodes and weights on an interval
    # [a,b] with truncation order N
    #
    # Suppose you have a continuous function f(x) which is defined on [a,b]
    # which you can evaluate at any x in [a,b]. Simply evaluate it at all of
    # the values contained in the x vector to obtain a vector f. Then compute
    # the definite integral using sum(f.*w);
    #
    # Written by Greg von Winckel - 02/25/2004
    # First adapted to Julia by Tyler Ransom on 08-04-2015
    # Updated on 08-17-2020
    N  = N0-1
    N1 = N+1
    N2 = N+2
    xu = range(-1,stop=1,length=N1)    
    # Initial guess
    y = cos.((2*(0:N) .+ 1)*pi/(2*N .+ 2))  .+  ( 0.27/N1 ) .* sin.( pi .* xu .* N/N2 )
    # Legendre-Gauss Vandermonde Matrix
    L  = zeros(N1,N2)
    # Derivative of LGVM
    Lp = zeros(N1,N2)
    # Compute the zeros of the N+1 Legendre Polynomial
    # using the recursion relation and the Newton-Raphson method
    y0 = 2
    vareps = 2e-52
    # Iterate until new points are uniformly within epsilon of old points
    i = 0
    tracker=0
    it_max=10
    while (norm(y.-y0,Inf)>vareps && tracker<=it_max)
        d=norm(y.-y0,Inf)    
        L[:,1]  .= 1
        Lp[:,1] .= 0
        L[:,2] .= y
        for k=2:N1
            L[:,k+1] = ( (2*k-1)*y .* L[:,k] .- (k-1)*L[:,k-1] )/k
        end
        Lp = (N2)*( L[:,N1] .- y .* L[:,N2] )./(1 .- y.^2)
        y0 = y
        y  = y0 - L[:,N2]./Lp
        if norm(y.-y0,Inf)==d
            tracker+=1
        end
        i+=1
    end
    # Linear map from[-1,1] to [a,b]
    x = (a.*(1 .- y) .+ b .* (1 .+ y))./2
    # Compute the weights
    w=(b-a)./((1 .- y.^2).*Lp.^2)*(N2/N1)^2
    return x,w
end

# OBJECTIVE FUNCTION
# Welfare (utility) at each point in time  
welfare_manual(r;k=k,a=a) = - k*(exp(-a*r) - 1) / a
welfare_auto(r;k=k,a=a) = begin x, w = lgquadrature_weights(10,0.0,r); return dot(w, demand_price.(x)) end

# to test them:
welfare_manual(0.05) ≈ welfare_auto(0.05)

discount(t; ρ=ρ) = exp(-ρ*t) # discount function

# Equation of motion of the state variable
var_stocks(r) = -r

opt = Ipopt.Optimizer   # desired solver
ns = 1000;             # number of points in the time grid

m = InfiniteModel(opt)
@register(m, welfare_auto(r))
@infinite_parameter(m, t in [0, T], num_supports = ns)

@variable(m, S, Infinite(t), start=S₀) ## state variables
@variable(m, r, Infinite(t), start=0.0) ## control variables

@objective(m, Max, integral(welfare_auto(r), t, weight_func = discount))

@constraint(m, S(0) == S₀)
@constraint(m, S(T) == Sτ)
@constraint(m, r >= 0)
@constraint(m, c1, deriv(S, t) == var_stocks(r))

optimize!(m)
termination_status(m)


r_opt = value(r)
S_opt = value(S)
ts = supports(t)
opt_obj = objective_value(m) 

ix = 2:(length(ts)-1) # index for plotting
plot(ts[ix],  S_opt[ix], lab = "S: resource stock")
plot(ts[ix],  r_opt[ix], lab = "r: resource extraction")
plot(ts[ix],  demand_price.(r_opt[ix]), lab = "p: price")

# For checking...
price_manual(t;k=k,ρ=ρ,T=test) = min(k, k*exp(ρ*(t-T)))
plot!(ts[ix],  price_manual.(ts[ix]), lab = "p: price (manual)") # to check

Yep, I would expect the analytical version to outperform the one that is autodifferentiated (there is far less to compute). In general, when using JuMP or InfiniteOpt, one should only register if absolutely necessary.

To initialize the variables, InfiniteOpt accepts scalars or functions with the appropriate number of inputs (see Variables · InfiniteOpt.jl). I provided a more interesting example here; notice how I specify guess trajectory functions for the states via guess_xs.
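For this model, a minimal (untested) sketch of function-based start values could be the following; the linear drawdown guess for S and the constant extraction guess for r are only illustrative assumptions, not the analytical solution:

@variable(m, S, Infinite(t), start = (t) -> S₀ * (1 - t / T)) # guess: the stock falls linearly to zero over [0, T]
@variable(m, r, Infinite(t), start = (t) -> S₀ / T)           # guess: constant extraction rate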
