Solving the 4 quadrants of dynamic optimization problems in Julia. Help Wanted!

Solving dynamic optimization problems is at the heart of economics (and many other fields).
The goal of this post is to explore various tools for solving dynamic optimization problems in the Julia ecosystem and, hopefully, to see if there are opportunities for synergy between the many different communities that solve these types of problems.

This post will consider a firm that chooses capital investment to maximize its value.
Here are the 4 quadrants of modeling:
[image: the 4 quadrants of modeling — {discrete, continuous} time × {deterministic, stochastic}]

We will study the case F(K)=zK, p_t =1, and C(I,K)=I + \frac{1}{2} I^2 because it has simple closed form solutions (for comparison).
Here is the corresponding closed form solution:
[image: closed-form solutions]
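
For reference, here is a sketch of the closed forms (reconstructed from the guess-and-verify solutions and the code later in this thread; the value function is affine in K and optimal investment is constant):

Discrete time (\beta = \frac{1}{1+r}): guess V(K) = a + qK. The FOC and envelope condition give
q = \frac{(1+r)\,z}{r+\delta}, \qquad I^{*} = \beta q - 1 = \frac{z-r-\delta}{r+\delta}, \qquad K_{ss} = \frac{I^{*}}{\delta}, \qquad V(K) = (V_{ss} - qK_{ss}) + qK,
where D_{ss} = zK_{ss} - I^{*} - \frac{1}{2}(I^{*})^2 and V_{ss} = \frac{1+r}{r}D_{ss}.

Continuous time: with the costate q \equiv 1 + I,
\dot{I} = (\rho+\delta)(1+I) - z, \qquad I_{ss} = \frac{z}{\rho+\delta} - 1, \qquad K_{ss} = \frac{I_{ss}}{\delta}.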

More detail for the deterministic cases.

Once a user (economist/engineer/etc) chooses a modeling framework (e.g. deterministic/discrete-time) there are often many different ways to solve the same problem.

Hence, a modeler has 3 steps:

  1. Decide how to formulate the problem (e.g. Deterministic/Continuous-Time)
  2. Decide which mathematical problem to solve (a sketch for the continuous-time/deterministic case follows this list):
    -Sometimes it's easier to solve a system of three BV-DAEs
    -Sometimes it's easier to solve a system of two first-order BV-ODEs
    -Sometimes it's easier to solve one second-order BV-ODE
    -Sometimes it's easier to solve the HJB-pde (in this case it's an ODE)
    -Note: not all approaches involve solving equations (e.g. see InfiniteOpt.jl)
  3. Decide which method(s) to use to solve the mathematical problem specified above
    -E.g. there are many methods a modeler can use to solve a system of two first-order difference equations (perturbation, projection, and hybrids).
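
To make step 2 concrete for the continuous-time/deterministic quadrant, here is a sketch (my own derivation, consistent with the inv! code further down the thread). The system of two first-order BV-ODEs in (K, I), using the costate q = 1 + I, is

\dot{K} = I - \delta K, \qquad \dot{I} = (\rho+\delta)(1+I) - z,

with K(0) = K_0 given plus a transversality/boundedness condition as t \to \infty, and the HJB "pde" (here an ODE in K) is

\rho V(K) = \max_{I} \left\{ zK - I - \tfrac{1}{2}I^2 + V'(K)\,(I - \delta K) \right\},

whose FOC gives I = V'(K) - 1, i.e. q = V'(K).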

There are ENORMOUS trade-offs users need to make.
For example, perturbation methods (SolveDSGE.jl) are a powerful approach for solving large systems quickly, but they generally only work for models w/ well-defined steady states.
They also need to be extended to solve models w/ occasionally binding constraints (OccBin is not yet implemented in SolveDSGE.jl; see Julia examples here).

Dynamic programming approaches (such as VFI) can solve a very wide variety of problems, but they tend to be very, very slow. Currently, Julia does not have a generic equivalent to Matlab's VFIToolkit. There was a Julia GSoC project that solved the optimal growth model w/ parallel VFI, but it was not generic.

Quadrant: Discrete-time/Deterministic

| Package/Sample Code | Description | Note |
| --- | --- | --- |
| SolveDSGE.jl (code link) | Solves difference equations w/ perturbation/projection | Only for discrete-time models w/ well-defined steady states. @RJDennis |
| POMDPs.jl (code link) | Solves Bellman equations w/ various methods | discussion. @zsunberg |
| JuMP.jl / SDDP.jl | | Not sure it solves infinite-horizon models. @odow & co |

Quadrant: Discrete-time/Stochastic

| Package/Sample Code | Description | Note |
| --- | --- | --- |
| SolveDSGE.jl (code link) | Solves stochastic difference equations w/ perturbation/projection | Only for discrete-time models w/ well-defined steady states. @RJDennis |
| POMDPs.jl | Solves Bellman equations w/ various methods | Please post code below using e.g. a 2-state Markov chain for z. @zsunberg |

Quadrant: Continuous-time/Deterministic

| Package/Sample Code | Description | Note |
| --- | --- | --- |
| (code link) | Solves the HJB equation using Ben Moll's method | My custom solver. Works pretty nicely. |
| EconPDEs.jl (code link) | Solves HJB equations | @matthieu |
| InfiniteOpt.jl | Solves the Hamiltonian | Only finite horizon currently. @pulsipher |
| SciML.jl | Solves DEs | TO DO. Only finite horizon currently. Try the @MainBatDealer & @ptoche trick to solve a DE w/ an infinite-horizon boundary constraint. @ChrisRackauckas |
| Flux.jl (code link) | Solves the HJB w/ a NN | @Goosee |

Quadrant: Continuous-time/Stochastic

| Package/Sample Code | Description | Note |
| --- | --- | --- |
| EconPDEs.jl | Solves HJB equations | To do. @matthieu |
| Flux.jl | Solves HJB equations w/ deep learning | @Goosee |

There are many other dynamic programming methods (VFI/PFI/ECM…) that currently don't have generic/robust implementations in Julia.
@sglyon has 7+ methods.
@floswald has others.
Unfortunately, @sglyon's code is not generic like VFIToolkit.m.

To Do:
check out QuantEcon.jl
solve w/ SDDP.jl
solve 2-state Markov Chain version w/ POMDPs.jl
solve CTS-time w/ SciML.jl
solve stochastic w/ EconPDEs.jl
solve infinite horizon w/ InfiniteOpt.jl
check out Dynare.jl (not maintained?)
Dolo.jl
DifferentiableStateSpaceModels.jl
MFGNet.jl

Oren Levintal has a state-of-the-art global package in Matlab
Robert Kirkby has a phenomenal VFI package in Matlab

Please let me know what I missed!

29 Likes

Here is a discussion on how to solve the finite-horizon version of this problem w/ InfiniteOpt.jl:
https://github.com/pulsipher/InfiniteOpt.jl/discussions/194

2 Likes

Infinite-horizon continuous-time problems with a saddle-point structure can be solved using the time-reversal technique outlined here. At some point, a more general technique should be put together, though. :grinning:

Edit: Adding some code for the simple deterministic saddle path. As it turns out, the optimal trajectory features a constant level of investment (if I copied the equations above correctly); for this reason I have drawn the optimal trajectory with dashes, so we can see the black nullcline below it. Solving this problem is just like solving the Ramsey-Cass-Koopmans model (see the link above). Both models should be part of a general setup, as discussed below by Chris and Jesse.

using DifferentialEquations
using Plots

# Parameters
p = Dict(:δ => 0.1, :ρ => 0.05, :z => 0.3)

"""firm investment problem
   K: state variable, with given initial value K0
   I: control variable, free to jump"""
function inv!(du,u,p::Dict,t)
    δ, ρ, z = p[:δ], p[:ρ], p[:z]
    K, I = u
    du[1] = I - δ * K
    du[2] = (ρ + δ - z) + (ρ + δ)*I
    nothing
end

"""compute the stationary state"""
function ss(p::Dict)
    δ, ρ, z = p[:δ], p[:ρ], p[:z]
    Iss = z / (ρ+δ) - 1
    Kss = Iss/δ
    return Dict(:Kss => Kss, :Iss => Iss)
end

# The stationary state
SS = ss(p)  # returns 10.0 and 1.0 for reference parameter values
Kss, Iss = ss(p)[:Kss], ss(p)[:Iss]

# event handler to prevent variables going out of bounds
function set_callback()
    c1(u, t, integrator) = u[1]  # stay in positive range
    c2(u, t, integrator) = u[2]  # stay in positive range
    c3(u, t, integrator) = 2*Kss - u[1]  # don't stray too far from stationary state
    c4(u, t, integrator) = 2*Iss - u[2]  # don't stray too far from stationary state
    function terminate_affect!(integrator)
        terminate!(integrator)
    end
    # set up affect upon callback
    cb1 = ContinuousCallback(c1, terminate_affect!)
    cb2 = ContinuousCallback(c2, terminate_affect!)
    cb3 = ContinuousCallback(c3, terminate_affect!)
    cb4 = ContinuousCallback(c4, terminate_affect!)
    cb = CallbackSet(cb1, cb2, cb3, cb4)
    return cb
end

cb = set_callback()

# Run time "backwards" | start below the stationary state
tspan = (0.0, -100.0)
tweak = 1.01  # I0 starts above Iss
u0 = [0.999999 * Kss; 0.999999 * tweak * Iss]
ode = ODEProblem(inv!, u0, tspan, p)
sol1 = solve(ode, abstol=1e-8, reltol=1e-8, callback=cb)

# Run time "backwards" | start above the stationary state
tspan = (0.0, -100.0)
tweak = 0.99 # I0 starts below Iss
u0 = [1.00001 * Kss; 1.00001 * tweak * Iss]
ode = ODEProblem(inv!, u0, tspan, p)
sol2 = solve(ode, abstol=1e-8, reltol=1e-8, callback=cb)

# initialize plot
plt = plot(title="Policy Function in the Neoclassical Investment Model", xlims=(0,2*Kss), ylims=(0,2*Iss), xlabel="K(t)", ylabel="I(t)", legend=false)

# plot nullclines
plot!(plt, x-> p[:δ] * x, 0, 2*Kss, color=:black, linewidth=3, legend=false)
plot!(plt, x-> Iss, 0, 2*Kss, color=:black, linewidth=3, legend=false)
plot!(plt, sol1, xflip=false, vars=(1,2), xlims=(0,2*Kss), ylims=(0,2*Iss), color=:blue, linewidth=2, linestyle=:dash)
plot!(plt, sol2, xflip=false, vars=(1,2), xlims=(0,2*Kss), ylims=(0,2*Iss), color=:blue, linewidth=2, linestyle=:dash)

# add decoration
scatter!((Kss,Iss), label=["Stationary State" "."], color=:red)

savefig(plt, "investment-saddle.png")


# make the streamplot
using CairoMakie

"""firm investment model equations as points in (K,I) space"""
function invEq(K, I, p)
    δ, ρ, z = p[:δ], p[:ρ], p[:z]
    dK = I - δ * K
    dI = (ρ + δ - z) + (ρ + δ)*I
    return Point(dK, dI)
end

let
    fig = Figure(fontsize = 20)
    ax = Axis(fig, xlabel="K", ylabel="I", title="Streamplot of Neoclassical Investment Model", backgroundcolor=:black)
    streamplot!(ax, (x,y)->invEq(x,y,p), 0..2*Kss, 0..2*Iss, colormap = Reverse(:plasma),
        gridsize= (32,32), arrow_size = 10)
    fig[1, 1] = ax
    display(fig)
    save("investment-streamplot-black.png", fig)
end

[image: investment-saddle.png — the saddle path produced by the code above]

1 Like

I find this interesting but the problem description is too economical (half-pun) for me to follow. Any chance that we can have more details on the mathematical problem, in particular the stochastic continuous quadrant?

1 Like

@mschauer
1: A firm begins period "t" w/ capital K_t
2: it uses the capital to produce output according to the technology F(K_t)=z\times K_t, where z is a productivity parameter (constant in deterministic models, stochastic in stochastic models)
3: \delta\times K_t of the firm's capital depreciates after production (\delta is a constant depreciation parameter)
4: the firm invests (buys capital) I_t at purchase price p_t=1 (so the purchase cost is I_t) and pays a quadratic installation cost \frac{1}{2} I_t^2
5: the firm pays the owners a dividend D_{t} \equiv zK_{t} - I_t - \frac{1}{2} I_t^2
-The firm faces a trade-off when investing: lower dividend today vs higher dividend in the future
6: capital evolves according to the Law of Motion (LOM)
K_{t+1}=(1-\delta)K_{t} +I_{t} (discrete-time)
\dot{K}=I_t -\delta K_{t} (continuous time)
-the firm begins the next period w/ K_{t+1} or K_{t+dt} (a short simulation sketch of this LOM follows right after this list)
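
For concreteness, here is a minimal simulation sketch of the discrete-time LOM and dividend under a constant investment policy (the parameter values below are placeholders, not a calibration):

# Illustrative sketch: simulate K_{t+1} = (1-δ)K_t + I_t and D_t = zK_t - I_t - 0.5*I_t^2
# under a constant investment policy I_t = I (placeholder numbers)
δ, z = 0.1, 0.17
I, K0, T = 0.13, 0.5, 50    # constant investment, initial capital, horizon
K = zeros(T + 1); K[1] = K0
D = zeros(T)
for t in 1:T
    D[t]   = z*K[t] - I - 0.5*I^2   # dividend paid to the owners
    K[t+1] = (1 - δ)*K[t] + I       # law of motion for capital
end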

I fully specified the deterministic (discrete time & continuous time) models & solved them in closed form above (& compare them w/ the various numerical solutions).

The only missing component for the stochastic models is the productivity process.
I omitted it bc I haven't (yet) solved the stochastic model in closed form.
It's a bit of an art & science to specify a stochastic process that is interesting & tractable.
Here is code where I solve the stochastic-discrete time model w/ an AR-1 tech shock:
z_{t+1} = 0.5 + 0.5*z_{t} + \varepsilon_{t+1}
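
A minimal simulation sketch of this process (this is not the linked code; the innovation distribution N(0, 0.1^2) is my own placeholder):

using Distributions, Random
# simulate z_{t+1} = 0.5 + 0.5*z_t + ε_{t+1} with ε ~ N(0, 0.1^2)
Random.seed!(1234)
T = 200
z = zeros(T); z[1] = 1.0              # start at the unconditional mean, 0.5/(1 - 0.5) = 1
ε = rand(Normal(0.0, 0.1), T)
for t in 1:T-1
    z[t+1] = 0.5 + 0.5*z[t] + ε[t+1]
end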

For the stochastic, continuous-time case, economists typically use a diffusion process, a Jump/Poisson process, or a Jump Diffusion.
I left that part to readers like you to specify your favorite process (maybe GBM?) that makes the problem not too hard to solve.
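
For example, here is a minimal GBM sketch w/ DifferentialEquations.jl (my own illustration; the drift/volatility numbers are placeholders, and see the warning below about boundedness):

using DifferentialEquations
# dz = μ z dt + σ z dW: geometric Brownian motion for productivity (illustrative only)
μ, σ, z0 = 0.01, 0.05, 0.3
drift(z, p, t) = μ * z
vol(z, p, t)   = σ * z
prob  = SDEProblem(drift, vol, z0, (0.0, 100.0))
zpath = solve(prob, SRIW1())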

Warning: sometimes the process needs to be bounded for the solution to be well-defined
@matthieu sometimes uses a diffusion w/ reflecting boundaries that stays between [\underline{z},\bar{z}]

Researchers from economics/bio/physics/etc often use Distributions.jl when working w/ random variables.
The world would be a much better place if researchers from different fields came together to solve intertemporal optimization problems.

3 Likes

Note that this fills in a separate quadrant as well:

https://github.com/SciML/DifferenceEquations.jl

Thatā€™s stochastic discrete time, separating out the mathematical part from the DSGE modeling.

3 Likes

Got it!

1 Like

What @ChrisRackauckas is saying is important for all of these when we want to make composable packages (especially for AD). This cuts across some of the quadrants you have outlined.

What many people think of as a "solution" is really just spitting out a new dynamic process (potentially stochastic) as an IVP. Given those solutions, it would be great if we all used the same software rather than having those orthogonal parts built directly into the control-theoretic solvers themselves.

DifferenceEquations (when it is ready) and all of the existing SDE/DE/etc. stuff already in SciML for continuous time fulfill those parts of the puzzle.

5 Likes

Exactly. Essentially, there are a few methods used here, along with their adjoints etc. to speed up over what Zygote would generate, so we want to implement them once in a package that is just about the mathematical problem and the solution process. We believe modeling packages should be kept separate from this: say, a DSGE modeling package should be separate from the solver and reuse these solver components.

DiffEq has been used for a lot of this, where we have the jump process and the jump diffusion ones (along with the differential equations of course), but there's been a noticeable gap in the interface for discrete time stochastic models.

3 Likes

Any discrete-time stochastic dynamical system and any static hierarchical PPL model can be described as a composition of Markov kernels; I think that is the next ecosystem-integrating abstraction we should pin down.

1 Like

In the same spirit as this post, @sdwfrost solves the SIR model in all 4 quadrants here:
[image: the SIR model across the 4 quadrants]
A few observations:

  1. @sdwfrost solves this model 26+ ways, doing a great job of illustrating the Julia ecosystem. Bravo!
  2. These models are generally initial value problems & there (currently) exist more packages & other resources for solving these types of problems.
    Problems that economists are interested in tend to be boundary value problems (often w/ infinite horizon terminal conditions), which are harder to solve w/ some "standard solvers" not designed for econ.
    (Note: I've had some success w/ Mathematica, POMDPs.jl. I don't see why discrete-time dynamic programming solvers can't be used for econ problems…)
  3. The standard SIR model has no closed form solution.
    My motto is "trust but verify", hence I strongly prefer that packages verify their solvers work w/ as many closed form solutions as possible. (This is why I focus on econ models w/ known closed form solutions here.)
    A slightly modified SIR does have a closed form solution.

Thanks to @ChrisRackauckas for the link.

1 Like

Your recent post reminded me of this thread. This book was recently released and it seems to address all possible solution approaches to discrete dynamic problems:
https://www.wiley.com/en-au/Reinforcement+Learning+and+Stochastic+Optimization%3A+A+Unified+Framework+for+Sequential+Decisions-p-9781119815044

The description says several things that could be potentially useful for economics:

Reinforcement Learning and Stochastic Optimization offers a single canonical framework that can model any sequential decision problem using five core components: state variables, decision variables, exogenous information variables, transition function, and objective function. This book highlights twelve types of uncertainty that might enter any model and pulls together the diverse set of methods for making decisions, known as policies, into four fundamental classes that span every method suggested in the academic literature or used in practice.

Reinforcement Learning and Stochastic Optimization is the first book to provide a balanced treatment of the different methods for modeling and solving sequential decision problems, following the style used by most books on machine learning, optimization, and simulation.

Could it be useful for economists?

2 Likes

Currently the link to the book is broken. Here is an updated link

2 Likes

I have written the package SpeedMapping.jl to accelerate the convergence of fixed-point mappings. You may want to try it to speed up applications such as VFIs. Feel free to contact me for details.
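
A minimal usage sketch (my own toy example, accelerating the scalar fixed point x = cos(x); see the VFI example further down for a real application):

using SpeedMapping
# write the mapping in-place as m!(x_out, x_in) and hand it to speedmapping
g!(xout, xin) = (xout .= cos.(xin))
res = speedmapping([1.0]; (m!)=g!, tol=1e-10)
res.minimizer   # ≈ [0.739085...], the fixed point of cos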

1 Like

@nicolas It would be great if this sort of thing could be built into the SciML wrapper ecosystem - if it isn't already. Then it could be easily benchmarked relative to Anderson acceleration/etc. @ChrisRackauckas might have other plans for fixed point algorithms/nonlinear solvers but I think that NonlinearSolve.jl: High-Performance Unified Nonlinear Solvers · NonlinearSolve.jl might be the plan?

Also, the wrappers implement (or will eventually) forward and reverse-mode AD rules, so you get differentiable programming for free.
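
For reference, here is a minimal sketch of how a value-function fixed point V = T(V) could be posed as a root problem F(V) = T(V) - V = 0 for NonlinearSolve.jl (the operator below is a toy contraction standing in for a Bellman update, not a real one):

using NonlinearSolve
T_op(V) = 0.9 .* V .+ 1.0     # toy contraction standing in for a Bellman operator
F(V, p) = T_op(V) .- V        # roots of F are fixed points of T_op
prob = NonlinearProblem(F, zeros(3))
sol  = solve(prob, NewtonRaphson())
sol.u                          # ≈ fill(10.0, 3)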

2 Likes

@nicolas I'm very curious to apply your package. Can you please help me?
Letā€™s consider the discrete-time, deterministic example.
Here is my boring/slow VFI code:

δ=0.10; r=0.15; z= r + δ + .02;
u(K,I;z=z)= z*K -I -(.5)*(I^(2.0))
#
K_ss = (z - (r+δ))/(δ*(r+δ))
D_ss = u(K_ss, δ*K_ss)
V_ss = ((1+r)/r)*D_ss
Q_ss = (1+r)*(1+δ*K_ss)
V_cf(k) = (V_ss - Q_ss*K_ss) + Q_ss*k
Grid_K = 0.01*K_ss : .01 : 3.00*K_ss
#
using Interpolations, Plots, LinearAlgebra
approx(gr, val0) = LinearInterpolation(gr, val0, extrapolation_bc = Line())
struct MDP γ; S; A; T; R end
RHS(P,V,s,a) = P.R(s,a) + P.γ*V(T(s,a))
#
γ         = (1.0 / (1.0 + r))
S         = [Grid_K;]
A(s;δ=δ)  = [range( -(1.0 -δ)*s +.01, maximum(S), length=length(S));]
T(s,a)    = (1.0 -δ)*s + a
R(s,a)    = u(s,a)
P1 = MDP(γ, S, A, T, R)
V0 = [u(s, δ*s) for s in P1.S]
#
function MDP_VFI(P::MDP, k_max::Int64, V)
    pol = similar(V)
    for k = 1:k_max
        v̂ = approx(P.S, V)
        rhs = [ [RHS(P,v̂,s,a)  for a ∈ P.A(s)] for s ∈ P.S ]
        rhs = hcat(rhs...)
        Vp   = maximum(rhs, dims=1) |> vec
        V = Vp 
        #plot!(Vp) |> display
        #pol = (k==k_max) ? argmax(rhs, dims=1) |> vec : nothing 
    end
    return V  #, pol
end
V = MDP_VFI(P1, 500, V0)

plot(S, V)
plot!(S, V_cf.(S))

Start with a guess V_0(k) over Grid_K.
Update rule:
V_{t+1}(K) =\underset{I}{\max} \{ u(K, I) +\frac{1}{1+r}V_{t}( I +(1-\delta)K ) \}
Stop when: d(V_{t+1}(K), V_{t}(K))<\varepsilon

How do I solve VFI problems w/ your package?!?!

@Albert_Zevelev I have modified your function to slightly reduce memory allocation (just to make the speed comparison as accurate as possible) and make the loop stop sooner (when norm(V - Vp) < 1e-10).

To use the package, just write a map V → Vp and feed it to speedmapping with a starting point. On my computer, I get 3.3s running the accelerated loop and 1.7s using SpeedMapping, a rather modest improvement. Off the top of my head, I would say your problem converges relatively quickly already. Perhaps also the discretization (taking the maximum over array elements) may reduce the accuracy of the extrapolation compared to an ideal case with an infinite number of support points.

You also mentioned Anderson Acceleration. I have adapted an improved version of AA to Julia, based on a paper by Henderson and Varadhan (2019), which I also tested out of curiosity. The speed is the same as SpeedMapping's, so either algorithm would probably do.

Here is the code:

δ = 0.10;
r = 0.15;
z = r + δ + 0.02;
u(K, I; z=z) = z * K - I - (0.5) * (I^(2.0))
#
K_ss = (z - (r + δ)) / (δ * (r + δ))
D_ss = u(K_ss, δ * K_ss)
V_ss = ((1 + r) / r) * D_ss
Q_ss = (1 + r) * (1 + δ * K_ss)
V_cf(k) = (V_ss - Q_ss * K_ss) + Q_ss * k
Grid_K = 0.01*K_ss:0.01:3.00*K_ss
#
using Interpolations, Plots, LinearAlgebra
approx(gr, val0) = LinearInterpolation(gr, val0, extrapolation_bc=Line())
struct MDP
    γ
    S
    A
    T
    R
end
RHS(P, V, s, a) = P.R(s, a) + P.γ * V(T(s, a))
#
γ = (1.0 / (1.0 + r))
S = [Grid_K;]
A(s; δ=δ) = [range(-(1.0 - δ) * s + 0.01, maximum(S), length=length(S));]
T(s, a) = (1.0 - δ) * s + a
R(s, a) = u(s, a)
P1 = MDP(γ, S, A, T, R)
V0 = [u(s, δ * s) for s in P1.S]

function MDP_VFI(P::MDP, k_max::Int64, Vin; tol=1e-10)
    #    pol = similar(V)
    V = copy(Vin)
    Vp = similar(V)
    for k = 1:k_max
        v̂ = approx(P.S, V)
        rhs = [[RHS(P, v̂, s, a) for a ∈ P.A(s)] for s ∈ P.S]
        rhs = hcat(rhs...)
        Vp .= maximum(rhs, dims=1) |> vec
        if norm(V - Vp) < tol
            break
        end
        V .= Vp
        #plot!(Vp) |> display
        #pol = (k==k_max) ? argmax(rhs, dims=1) |> vec : nothing 
    end
    return V  #, pol
end
V_VFI = MDP_VFI(P1, 500, V0)

plot(S, V_VFI)
plot!(S, V_cf.(S))

# To use SpeedMapping, write the loop as a map
function map!(Vout, Vin, P)
    v̂ = approx(P.S, Vin)
    rhs = [[RHS(P, v̂, s, a) for a ∈ P.A(s)] for s ∈ P.S]
    rhs = hcat(rhs...)
    Vout .= maximum(rhs, dims=1) |> vec
end

using SpeedMapping
V_SM = speedmapping(V0; (m!)=(Vout, Vin) -> map!(Vout, Vin, P1), tol=1e-10).minimizer

# Solutions are similar
norm(V_VFI - V_SM)

using BenchmarkTools
@benchmark V = MDP_VFI(P1, 500, V0)
@benchmark speedmapping(V0; (m!)=(Vout, Vin) -> map!(Vout, Vin, P1), tol=1e-10)
1 Like

Thanks! It works well! The bigger the grid, the greater the advantage of using your package.

  1. Both solvers could still be much faster.
    My guess is I could use a faster approx(), among other things…
  2. Would it be hard to generalize this solver to n-dimensions?
    (Similar "generic" VFI solvers like VFIToolkit.m have been floating around, but nothing in Julia.)
    I have some code implementing other algorithms, including 7+ methods…

Sure. It does work with n dimensions already, as well as other number types like Float32 or BigFloat (on GitHub, I also have a new version for complex numbers, but I'm waiting a bit to release it officially in case bugs are found). If you have some application that doesn't work, just ask and I'll see what I can do.

2 Likes

Update:
I was able to formulate this as a Linear Quadratic Regulator optimal control problem & solved it w/ @andreasvarga's amazing package MatrixEquations.jl.

I did the infinite horizon, discrete time, deterministic version.
LQR/LQG can be used in all 4 quadrants for both finite/infinite horizon problems!

Formulation (note: there is more than one way to formulate this problem as an LQR; be careful!)
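
One possible formulation (reconstructed here from the code below, using the symmetrized Q and the rescaled S) takes the state x_t = \begin{bmatrix} 1 \\ K_t \end{bmatrix} and control u_t = I_t:

x_{t+1} = A x_t + B u_t, \qquad A = \begin{bmatrix} 1 & 0 \\ 0 & 1-\delta \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix},

\max_{\{u_t\}} \sum_{t=0}^{\infty} \beta^{t} \left( x_t' Q x_t + u_t' R u_t + 2 x_t' S u_t \right), \qquad Q = \begin{bmatrix} 0 & z/2 \\ z/2 & 0 \end{bmatrix}, \quad R = -\tfrac{1}{2}, \quad S = \begin{bmatrix} -\tfrac{1}{2} \\ 0 \end{bmatrix},

so that x_t' Q x_t + u_t' R u_t + 2 x_t' S u_t = zK_t - I_t - \tfrac{1}{2} I_t^2. The discount factor is absorbed by rescaling A \leftarrow \sqrt{\beta}\,A and B \leftarrow \sqrt{\beta}\,B before calling ared.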

Code:

using MatrixEquations, LinearAlgebra;
δ=0.1; r=0.05; z= r + δ +.02; β=1.0/(1.0 + r); # parameters

I_SS = (z-r-δ)/(r+δ);
K_SS = I_SS/δ;
D_SS = z*K_SS - I_SS - 0.5*(I_SS)^(2.0);
V_SS = ((1.0+r)/r)*D_SS;
Q_SS = (1+r)*(1+I_SS);

P_sol = [(V_SS - Q_SS*K_SS) 0.0; Q_SS   0.0]; # closed form P. Value = x'*P*x
F_sol = [-I_SS   0.0];                        # closed form u. Policy = -F_sol*x

A=[1.0 0.0; 0.0  (1.0 - Ī“)];
B=[0.0; 1.0];
Q=[0.0 0.0; z 0.0];
R=[-0.5];
S=[-1.0; 0.0];

S = 0.5*S; # my Riccati is different from MatrixEquations.jl
#Q = -1.0*Q; R = -1.0*R; S = -1.0*S;    # max to min. DO NOT NEED!
A = sqrt(β)*A; B = sqrt(β)*B; # scale by discount factor
Q = (Q + Q') ./ 2.0; R = (R + R') ./ 2.0;  P_sol = (P_sol + P_sol') ./ 2.0; # make symmetric  
                       
P, CLSEIG, F = ared(A,B,R,Q,S);             # solve for numerical value function = x'*P*x

# Compare Value function P. CORRECT!
julia> P
2×2 Matrix{Float64}:
 0.186667  0.595
 0.595     0.0

julia> P_sol
2×2 Matrix{Float64}:
 0.186667  0.595
 0.595     0.0

# Compare Policy function F. u = -F*x. CORRECT!
julia> F
1×2 Matrix{Float64}:
 -0.133333  -0.0

julia> F_sol
1×2 Matrix{Float64}:
 -0.133333  0.0

Fn  = pinv(R .+ (B')*P*B) * ((B')*P*A +S'); # numerical policy = -F_n*x. SAME as F from ared().
julia> Fn
1×2 Matrix{Float64}:
 -0.133333  0.0

LQR problems, which are used regularly by engineers in robotics, aeronautics & other fields, are also common in economics.
The world would be a much better place if we coordinated on (one or a few) high quality Julia implementations, where engineers/economists/nurses/plumbers all submitted bug reports, feature requests, pull requests…

2 Likes