Solving the 4 quadrants of dynamic optimization problems in Julia. Help Wanted!

It turns out the firm investment model used in the example above can be solved very precisely in all 4 quadrants (discrete vs. continuous time × deterministic vs. stochastic) using e.g. MatrixEquations.jl.
Hat tip to @baggepinnen & @andreasvarga for their help.
Here are infinite horizon versions of all 4 quadrants:


Here is the math to formulate the firm investment problem as LQ/LQG:

I’ll post the code later…

  1. Above I solve infinite horizon versions in all 4 quadrants.
    MatrixEquations.jl currently does not solve finite horizon versions (none that I could find…)
    QuantEcon.jl solves the finite horizon version, but only in discrete time (not continuous time)
    Would be nice if the Riccati solvers in both packages were benchmarked/compared (a rough sketch follows this list)…
  2. Symbolics.jl works insanely well (at least for what I needed so far).
    Though it took me a lot of time to figure out how to use it. I’m sure the docs will improve…
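
As a starting point, here is a rough benchmark sketch on a standard well-posed DARE (Q ⪰ 0, R ≻ 0, zero cross term), since the usual definiteness assumptions hold there. I am assuming QuantEcon.jl exports solve_discrete_riccati(A, B, Q, R) mirroring its Python API; note that ared takes its weights in the order (R, Q):

using MatrixEquations, QuantEcon, BenchmarkTools

A = [0.9 0.1; 0.0 0.8];  B = [0.0; 1.0;;];   # stable 2-state, 1-input test system
Q = [1.0 0.0; 0.0 1.0];  R = [1.0;;];        # Q ⪰ 0, R ≻ 0
S = zeros(2, 1);                             # zero cross term

@btime ared($A, $B, $R, $Q, $S);                 # MatrixEquations.jl
@btime solve_discrete_riccati($A, $B, $Q, $R);   # QuantEcon.jl (assumed signature)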

Sorry for interfering in this discussion, but I have the feeling that the function ared available in MatrixEquations is improperly used in this example. My background is control engineering, and the points discussed below reflect my understanding of the formulation of LQG problems in control theory.

  1. For a given pair (A,B) (I will discuss its properties at point 2), ared essentially computes an “optimal” stabilizing state feedback F such that the eigenvalues of A-B*F are stable (i.e., have moduli less than one in a discrete-time setting). For this, it is necessary that the pair (A,B) is stabilizable (i.e., such a feedback must exist). Moreover, it is standard when solving LQG problems to choose the control and state weighting matrices R and Q positive semidefinite. For this setting, ared computes an “optimal” stabilizing solution as expected.

  2. In this example the pair (A,B) is not stabilizable, because the unstable eigenvalue at 1 remains fixed for any feedback (see the check sketched after this list). Moreover, R is negative definite and Q is indefinite (just a symmetric matrix with both positive and negative eigenvalues). With this setting, ared is guaranteed to fail.

  3. By replacing (A,B) with the stabilizable pair (sqrt(β)*A,sqrt(β)*B), and with the above choice of R and Q, a “solution” F is computed by ared. It has, however, no practical effect on the resulting closed-loop dynamics, because the closed-loop eigenvalues computed in CLSEIG are the same as the eigenvalues of sqrt(β)*A. Therefore, the usefulness of this result seems to me highly questionable.

  4. If R is replaced by -R and Q = I is used, then at least one of the closed-loop eigenvalues will be improved.
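
To make point 2 concrete, here is a minimal PBH-test sketch: the pair (A,B) is stabilizable iff rank([A-λI B]) = n for every eigenvalue λ of A with |λ| ≥ 1.

using LinearAlgebra
δ = 0.1
A = [1.0 0.0; 0.0 1.0-δ];  B = [0.0; 1.0;;]
n = size(A, 1)
for λ in eigvals(A)
    abs(λ) < 1 && continue      # only unstable modes can break stabilizability
    println("λ = $λ: rank([A-λI B]) = ", rank([A - λ*I B]), " (need $n)")
end
# λ = 1.0 gives rank 1 < 2, so (A,B) is not stabilizable, as stated in point 2.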

  1. Thank you very much for taking the time to write your feedback.
    The whole point of this post is to bring together people from different fields who solve intertemporal optimization problems.
    Nobel laureate Tom Sargent brought many tools from engineering (LQR, Kalman filter) to 1st year graduate economics.
    I wish we all collaborated on solving control problems.
    For those who don’t know, @andreasvarga is an author of the Systems and Control Library SLICOT implemented in Fortran. He has shown that comparable performance can be achieved in Julia.
  2. The deterministic discrete time problem I seek to solve (the simplest I could think of):
    V(x_{t}) = \max_{u} \sum_{i=0}^{\infty} \beta^{i} \left( z \times x_{t+i} - u_{t+i} -\frac{1}{2} u_{t+i}^{2} \right)
    subject to: x_{t+1} = (1-\delta) x_{t} + u_{t}
    Initial condition: x_{0} given
    Terminal condition: \lim_{t \to \infty} \beta^{t} \lambda_{t} x_{t+1} =0
    Where:
    State variable: x_{t} {my current stock of capital}
    Control variable: u_{t} {capital investment}
    Parameters: \delta \in (0,1), \rho >0 \Rightarrow \beta \equiv\frac{1}{1+\rho} \in (0,1), z>\rho +\delta
    Reward function: z \times x_{t} - u_{t} -\frac{1}{2} u_{t}^{2} {profit = revenue - cost of investment}
    Note: investment has two costs, purchase cost u_{t} and quadratic installation cost \frac{1}{2} u_{t}^{2}
    Note: maximizing f is equivalent to minimizing -f, so we can rewrite the sequential problem:
    V(x_{t}) = \min_{u} \sum_{i=0}^{\infty} \beta^{i} (-1)\times \left( z \times x_{t+i} - u_{t+i} -\frac{1}{2} u_{t+i}^{2} \right)
    Unlike the notes above, I will proceed by formulating this as a minimization problem.
    Bellman equation:
    V(x_{t}) = \min_{u} \left\{ -1 \times \left( z \times x_{t} - u_{t} -\frac{1}{2} u_{t}^{2}\right) + \beta V(x_{t+1}) \right\}
    x_{t+1} = (1-\delta) x_{t} + u_{t}
    This simple problem has a closed form solution:
    Policy function: u(x_{t}) = \frac{z}{\rho + \delta} -1
    Value function: V(x_{t})= -\left( V_{ss} - Q_{ss} K_{ss} \right) - z \frac{1 + \rho }{\delta + \rho} x_{t} {find V_{ss}, Q_{ss}, K_{ss} in posts above}
  3. Since this problem has a quadratic reward/cost function and a linear transition function, it can be formulated as a discrete time LQR.
    Define the augmented state vector \hat{x}_{t}' \equiv \begin{bmatrix} 1 & x_{t}\end{bmatrix}
    f(x_{t}, u_{t}) = \hat{x}_{t}' Q \hat{x}_{t} + u_{t}' R u_{t} + 2 \hat{x}_{t}' S u_{t} = -1 \times \left( z \times x_{t} - u_{t} -\frac{1}{2} u_{t}^{2}\right)
    where:
    Q= -1 \times \begin{bmatrix} 0 & z/2\\ z/2 & 0 \end{bmatrix} \Rightarrow \hat{x}_{t}' Q \hat{x}_{t} = -1 \times\begin{bmatrix} 1 & x_{t}\end{bmatrix} \begin{bmatrix} 0 & z/2\\ z/2 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ x_{t}\end{bmatrix} = -z \times x_{t}
    R= -1 \times \begin{bmatrix} -\frac{1}{2} \end{bmatrix} \Rightarrow u_{t}' R u_{t} =\frac{1}{2} u_{t}^{2}
    S= -1 \times\begin{bmatrix} -1/2 \\ 0 \end{bmatrix} \Rightarrow 2 \hat{x}_{t}' S u_{t} = -1 \times2 \begin{bmatrix} 1 & x_{t}\end{bmatrix} \begin{bmatrix} -1/2 \\ 0 \end{bmatrix} \begin{bmatrix} u_{t}\end{bmatrix} = u_{t}
    x_{t+1} = g(x_{t}, u_{t}) = A \hat{x}_{t} + B u_{t} = (1-\delta) x_{t} + u_{t}
    where
    A = \begin{bmatrix} 1 & 0 \\ 0 & (1-\delta)\end{bmatrix}, B = \begin{bmatrix} 0 \\ 1\end{bmatrix} \Rightarrow \begin{bmatrix} 1 \\ x_{t+1} \end{bmatrix} = A \hat{x}_{t} + B u_{t}= \begin{bmatrix} 1 \\ (1-\delta) x_{t} + u_{t} \end{bmatrix}
    Some LQR solvers (like QuantEcon.jl) take the discount factor \beta as input; MatrixEquations.jl does not.
    Following your advice, I define A=\sqrt{\beta}A, B=\sqrt{\beta}B.
using MatrixEquations, LinearAlgebra; 

δ=0.1; ρ=0.05; z= ρ + δ +.02; β=1/(1+ρ); # parameters

A= √β*[1.0 0.0; 0.0  (1.0 - δ)];
B= √β*[0.0; 1.0];
Q=-1*[0.0 z/2; z/2 0.0];
R=-1*[-0.5;;];  
S=-1*[-1/2; 0.0];

P, CLSEIG, F = ared(A,B,R,Q,S);

# Issue 1: both eigenvalues of A-B*F are less than 1. 
# I don't understand the problem???
julia> CLSEIG
2-element Vector{ComplexF64}:
 0.8783100656536799 - 0.0im
 0.9759000729485324 - 0.0im

julia> eigvals(A-B*F)
2-element Vector{Float64}:
 0.8783100656536799
 0.9759000729485332

julia> eigvals(A)
2-element Vector{Float64}:
 0.8783100656536799
 0.9759000729485332


# Issue 2: R is positive definite & Q is indefinite.
# is that a problem? 
julia> eigvals(R)
1-element Vector{Float64}:
 0.5
#
julia> eigvals(Q)
2-element Vector{Float64}:
 -0.085
  0.085

# Issue 3: if ared() gives the wrong answer, 
# why is it exactly equal to the closed form solution?
I_SS = (z-ρ-δ)/(ρ+δ);
K_SS = I_SS/δ;
D_SS = z*K_SS - I_SS - 0.5*(I_SS)^(2.0);
V_SS = ((1.0+ρ)/ρ)*D_SS;
Q_SS = (1+ρ)*(1+I_SS);

P_sol = -1*[(V_SS - Q_SS*K_SS) Q_SS/2; Q_SS/2   0.0]; # closed form P. Value = x'*P*x
F_sol = [-I_SS   0.0];                                # closed form u. Policy = -F_sol*x

julia> P
2×2 Matrix{Float64}:
 -0.186667  -0.595
 -0.595     -0.0

julia> P_sol
2×2 Matrix{Float64}:
 -0.186667  -0.595
 -0.595     -0.0

julia> F
1×2 Matrix{Float64}:
 -0.133333  -0.0

julia> F_sol
1×2 Matrix{Float64}:
 -0.133333  0.0

I would love to get to the bottom of this.

Update: while the matrix Q above is indeed indefinite, recall the augmented state vector is \hat{x}_{t}' \equiv \begin{bmatrix} 1 & x_{t}\end{bmatrix}, hence \hat{x}_{t}' Q \hat{x}_{t} = -z \times x_{t} for any x_{t}.
Economists are only interested in the case where capital is strictly positive x_{t}>0, hence the quadratic form will always be strictly negative \hat{x}_{t}' Q \hat{x}_{t} = -z \times x_{t}<0.
Not sure if this is relevant.
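
A quick numeric check of that identity, reusing Q and z from the code above (x_t = 2 is an arbitrary test value):

xhat = [1.0, 2.0]            # augmented state with x_t = 2
xhat' * Q * xhat ≈ -z * 2.0  # true: the quadratic form is linear in x_t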

Also: the condition Q >= 0 is sufficient, but not necessary.

Also: I’m away from my computer, but eigvals(A-B*F) should be recomputed after dividing A and B by sqrt(β) to undo the normalization.
I think we will get one eigenvalue < 1 and one = 1,
due to the dummy state variable that always equals 1.
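
A minimal sketch of that check, reusing A, B, F, and β from the code above:

A0 = A ./ √β         # undo the √β normalization
B0 = B ./ √β
eigvals(A0 - B0*F)   # expect one eigenvalue < 1 and one exactly = 1 (the dummy state)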


I confess I did not go through all the technical details of this post (and the previous ones), but I suspect that one issue here might be related to the goal of the optimization. In (optimal) control theory, it is conventional to formulate the problem of optimal control as minimization of a cost. See the classical monographs by Kwakernaak & Sivan, Athans & Falb, Bryson & Ho, Kirk, Lewis, …

Indeed, this is one of the conceptual clashes between the optimal control theory as developed within control theory and reinforcement learning as originally developed within computer science, because in RL they are maximizing the value.

As a consequence, when the sum-of-squares type of a cost function is considered in control theory texts and software, namely,

J(x_0, u_0,u_1,\ldots,u_{N-1}) = \frac{1}{2}x_N^\top S x_N + \frac{1}{2} \sum_{k=0}^{N-1} \left [x_k^\top Q x_k + u_k^\top R u_k\right ]

with x_{k+1} = f(x_k,u_k), \; x_0 \; \mathrm{given}, it is necessary to impose nonnegativity conditions on the weighting matrices S, Q, R, that is,

S\succeq 0, \; Q \succeq 0, \; R \succeq 0.

The condition is strengthened to R\succ 0 for some scenarios, such as the ARE-based solution to the LQ-optimal control problem (the cost function is as above but the system dynamics is linear) when no constraint on the magnitude of u_k is imposed.

To summarize, while applying classical results originating in optimal control theory such as LQ-optimal control, you should be aware that the optimal control problem has surely been formulated as a minimization of some cost. Reformulation to maximization is straightforward; extension to saddle-point optimization has also been done (LQ differential games).
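
For readers coming from economics, the bridge between the two conventions used earlier in this thread is just a sign flip:

\max_{u} \sum_{k=0}^{\infty} \beta^{k}\, r(x_k,u_k) = -\min_{u} \sum_{k=0}^{\infty} \beta^{k} \left( -r(x_k,u_k) \right)

which is why the weighting matrices Q, R, S in the Julia code above all carry a factor of -1.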


I just did a test drive w/ @davidanthoff’s dynamic programming solver Judyp.jl.
WOW!
Here are two simulations: w/ capital starting below (left) & above (right) the steady state

using Judyp, Plots
par=(δ=0.1, ρ=0.05, z= 0.05 + 0.1 +.02, β=1/(1+0.05));

I_SS = (par.z-par.ρ-par.δ)/(par.ρ+par.δ);
K_SS = I_SS/par.δ;
K0 = 0.5*K_SS; # start below SS
K0 = 1.5*K_SS; # start above SS (run with one of these two lines commented out)

p1 = DynProgProblem(); 

set_transition_function!(p1) do k, k_new, x, up, p
    k_new[1] = (1-p.δ)*k[1] + x[1] 
end

set_payoff_function!(p1) do s, x, p
    p.z*s[1] - x[1] - 0.5*(x[1])^(2.0)
end

set_discountfactor!(p1, par.β)

set_exogenous_parameters!(p1, par)

add_state_variable!(p1,Symbol("K_1"),K0,0.0,3.0,10) #10 nodes [0,3]

add_choice_variable!(p1,Symbol("I_1"),0., 1.) # bounds [0,1] initial 0.5

T_sim = 150; N_sim=0; # number Monte Carlo runs

res1 = solve(p1) # res = solve(problem, solvers=[solver], print_level=1)
simres1 = simulate(res1, T_sim, N_sim);

plot(legend=:topright, ylims = (0,2))
plot!(simres1.state_vars[:],  c=1, lab="k")
plot!(1:T_sim, tt -> K_SS, c=1, l=:dash, lw=3, lab = "K_SS")
plot!(simres1.choice_vars[:], c=2, lab="i")
plot!(1:T_sim, tt -> I_SS, c=2, l=:dash, lw=3, lab = "I_SS")

It would be awesome to benchmark w/ VFIToolkit.m and @zsunberg’s POMDPs.jl and @odow’s SDDP.jl.


I solved the Discrete time, Deterministic, infinite horizon version w/ SDDP.jl.

using SDDP, Plots
import Ipopt

###################################################
δ, ρ = 0.1, 0.05; β = 1 / (1 + ρ);
z = ρ + δ + 0.02;
u_SS = (z - ρ - δ) / (ρ + δ);
x_SS = u_SS / δ;
x0 = 0.5 * x_SS; # start below SS
x0 = 1.5 * x_SS; # start above SS (keep one of the two)

# f(x,u;δ=δ) = (1 - δ)*x + u; # transition fcn 
# F(x,u;z=z) = z*x -u -0.5*u^2; # payoff fcn
opt = Ipopt.Optimizer;
T_sim=100;
#####################################################
graph = SDDP.LinearGraph(1)
SDDP.add_edge(graph, 1 => 1, β)

model = SDDP.PolicyGraph(
    graph; sense = :Max, upper_bound = 1000, optimizer = opt,
) do sp, _
    @variable(sp, x, SDDP.State, initial_value = x0) # state variable
    @variable(sp, u)                                 # control variable
    @constraint(sp, x.out == (1 - δ) * x.in + u)     # transition fcn 
    @stageobjective(sp, z * x.in - u - 0.5 * u^2)    # payoff fcn 
    return
end

SDDP.train(model; iteration_limit = 10)

sims = SDDP.simulate(model, 1, [:x, :u];
    sampling_scheme = SDDP.Historical([(1, nothing) for t in 1:T_sim])
);

sim_K = map(data -> data[:x].out, sims[1])
sim_i = map(data -> data[:u], sims[1])

plot(legend=:topright, ylims = (0,2))
plot!(sim_K,  c=1, lab="K")
plot!(1:T_sim, tt -> x_SS, c=1, l=:dash, lw=3, lab = "K_SS")
plot!(sim_i, c=2, lab="i")
plot!(1:T_sim, tt -> u_SS, c=2, l=:dash, lw=3, lab = "i_SS")

SDDP was designed to be discrete+stochastic. The S in SDDP stands for Stochastic.
In order to converge smoothly, it requires convexity and stage-independence of random variables.
There are generalizations that handle more general cases, at the cost of performance.
Note that many practical non-convex problems with stagewise dependent random variables are solved approximately with the SDDP algorithm every day to operate multiple power systems in many countries.


Here is the solution using InfiniteOpt.jl.
Even though this package is intended to solve finite horizon models, we can set the salvage value equal to the value function of the infinite horizon problem (which I solved in closed form above).

using InfiniteOpt, Ipopt, Plots
δ=0.10;  # capital depreciation
ρ=0.05;  # discount rate
T = 90.0 # life horizon
z = ρ + δ + 0.02 # need: z > ρ + δ
I_SS = (z-ρ-δ)/(ρ+δ); 
K_SS = I_SS/δ;
D_SS = z*K_SS - I_SS - 0.5*(I_SS)^(2.0);
V_SS = ((1.0)/(ρ))*D_SS;
Q_SS = (1+I_SS);
k0 = 0.5K_SS # endowment

u(k,i;z=z) = z*k -i -(1/2)*(i)^2 # utility function
S(k) = exp(-ρ*T)*(V_SS - Q_SS*K_SS + Q_SS*k)
discount(t; ρ=ρ) = exp(-ρ*t)     # discount function
BC(k, i; δ=δ) = i - δ*k          # LOM 

opt = Ipopt.Optimizer            # desired solver
ns = 10_000;                     # number of gridpoints
m = InfiniteModel(opt)
@infinite_parameter(m, t ∈ [0, T], num_supports = ns)
@variable(m, k, Infinite(t)) ## state variables
@variable(m, i, Infinite(t)) ## control variables
@objective(m, Max, integral(u(k,i), t, weight_func = discount) + S(k(T)))
@constraint(m, c1, deriv(k, t) == BC(k, i; δ=δ))
@constraint(m, c2, k >= 0)                   # Can't sell more than all your capital.
@constraint(m, k == k0, DomainRestrictions(t => 0))
@constraint(m, i(T) >=-1)
@constraint(m, k(T) >=0)


optimize!(m)
termination_status(m)

i_opt = value(i)
k_opt = value(k)
ts = supports(t)
opt_obj = objective_value(m) # V(k0, 0)

ix = 2:(length(ts)-1) # index for plotting
#
plot(legend=:topright, ylims=(0,2));
plot!(ts[ix], k_opt[ix], color = 1, lab = "k: InfiniteOpt")
plot!(ts[ix], tt -> K_SS, color = 1, lab = "K_SS", l=:dash, lw=3)
plot!(ts[ix], i_opt[ix], color = 2, lab = "i: InfiniteOpt")
plot!(ts[ix], tt -> I_SS, color = 2, lab = "I_SS", l=:dash, lw=3)



It would be awesome to see if the discrete-time version of this problem can be solved with @Norman’s package SequenceJacobians.jl.

It’s a WIP, but I was able to figure out how to solve the simple RBC model from the examples w/ his pkg:

using SequenceJacobians, NLsolve;

# Define RBC Model ################################################################
@simple function firm(K, L, Z, α, δ)
    r = α * Z * (lag(K) / L)^(α-1) - δ
    w = (1-α) * Z * (lag(K) / L)^α
    Y = Z * lag(K)^α * L^(1-α)
    return r, w, Y
end

@simple function household(K, L, w, eis, frisch, φ, δ)
    C = (w / (φ * L^(1/frisch)))^eis
    I = K - (1 - δ) * lag(K)
    return C, I
end

@simple function mkt_clearing(r, C, Y, I, K, L, w, eis, β)
    goods_mkt = Y - C - I
    euler = C^(-1/eis) - β*(1+lead(r))*lead(C)^(-1/eis)
    walras = C + K - (1+r)*lag(K) - w*L
    return goods_mkt, euler, walras
end

rbcblocks() = (firm_blk(), household_blk(), mkt_clearing_blk())
################################################################
m = model(rbcblocks()) # vsrcs(m) == [:K, :L, :Z, :α, :δ, :eis, :frisch, :φ, :β]
calis = [:L=>1, :eis=>1, :frisch=>1, :δ=>0.025, :α=>0.11]
tars = [:goods_mkt=>0, :r=>0.01, :euler=>0, :Y=>1]
inits = [:φ=>0.9, :β=>0.99, :K=>2, :Z=>1]

# [:K init 2, :L=1, :Z init 1.0, :α=0.11, :δ=0.025, :eis=1, :frisch=1, :φ init 0.9, :β init 0.99]

ss = SteadyState(m, calis, inits, tars)
f!(y, x) = residuals!(y, ss, x)
r = solve!(NLsolve_Solver, f!, ss.inits) # r[1] # getval(ss, :K)

J = TotalJacobian(model(ss), [:Z,:K,:L], [:euler, :goods_mkt], getvarvals(ss), 300)
GJ = GEJacobian(J, :Z)
dZ = zeros(300)
dZ[11:end] .= getvarvals(ss)[:Z] .* 0.01 .* 0.8.^(0:289)
irfs = linirf(GJ, :Z=>dZ)
dCdZ = irfs[:Z][:C]
dwdZ = irfs[:Z][:w]
dYdZ = irfs[:Z][:Y]

ε = randn(10_000)
# simulate(GJ,exovar, endovar, ε, ρ,   σ=1.0; kwargs...)
s = simulate(GJ, :Z, :K,     ε, 0.9)
s = simulate(GJ, :Z, :K,     ε, 0.9, 1.00)
#
s = simulate(GJ, :Z, :K,     ε, 0.9, 0.02)
s = simulate(GJ, :Z, :K,     ε, 0.9, 0.00)
s = simulate(GJ, :Z, :K,     ε, 0.9, 0.10)
#
s = simulate(GJ, :Z, :K,     ε, 0.2, 0.02)
s = simulate(GJ, :Z, :K,     ε, 0.2, 0.00)
s = simulate(GJ, :Z, :K,     ε, 0.2, 0.10)

Hi, @Albert_Zevelev! Thanks for your interest.

What I was experimenting with in SequenceJacobians.jl is essentially a Julia implementation of the Python package shade-econ/sequence-jacobian developed by the authors of the original paper. You might want to read their Econometrica paper to figure out what these packages are about. I wrote the Julia code to help me understand exactly how their methods work in practice (and also because I am less proficient in Python for more complicated work).


Thanks. I’m not having luck applying it to the following very simple model (w/o equilibrium prices).

State variables: K_{t}, Z_{t} {capital stock & tech shock}
Control variables: I_{t} {firm investment}

LOM K: K_{t+1} = I_{t} + (1-\delta) K_{t}
LOM Z: Z_{t+1} = (1-\rho_{z})\mu_{z} + \rho_{z} Z_{t} + \sigma_{z} \varepsilon_{t}, \quad \varepsilon_{t} \sim_{iid} N(0,1)
EE: I_{t+1} = \frac{\rho + \delta - Z_{t}}{1 - \delta} + \frac{1 +\rho }{1-\delta} I_{t}
note: this model has a unique steady-state
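
A hand check of that steady state (a sketch; the numbers match the getvarvals(ss) output further below):

ρ, δ, Z = 0.05, 0.10, 0.17
I_ss = (Z - ρ - δ)/(ρ + δ)   # EE with I_{t+1} = I_t: I*(1 - (1+ρ)/(1-δ)) = (ρ+δ-Z)/(1-δ)
K_ss = I_ss/δ                # LOM K with K_{t+1} = K_t
# (I_ss, K_ss) ≈ (0.1333, 1.3333)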

@Norman I got your pkg to work, but needed to push investment 1-period forward.
(To be clear, the pkg no longer gives an error and solves for the correct steady state; I’m sure more things still need to be corrected…)

using SequenceJacobians, NLsolve;
@simple function firm(K, δ)
    I = K - (1 - δ) * lag(K)
    return I
end
@simple function mkt_clearing(Z, I, δ, ρ)
    euler = lead(I)  - ((ρ+δ-Z)/(1-δ))  - ((1+ρ)/(1-δ)) * I
    return euler 
end
rbcblocks() = (firm_blk(), mkt_clearing_blk())
m = model(rbcblocks()) # vsrcs(m) == [:K, :δ, :Z, :ρ]
calis = [:δ=>.10, :ρ=>0.05, :Z=>.17]
tars = [:euler=>0] # 
inits = [:K=>1]
ss = SteadyState(m, calis, inits, tars)
f!(y, x) = residuals!(y, ss, x)
r = solve!(NLsolve_Solver, f!, ss.inits) # r[1] # getval(ss, :K)

# julia> getvarvals(ss)
# (K = 1.33, ρ = 0.05, Z = 0.17, δ = 0.1, I = 0.1333, euler = 1.345e-7)

n_irf = 300; #n_irf = 2_000;

J = TotalJacobian(model(ss), [:K,:Z], [:euler], getvarvals(ss), n_irf)
GJ = GEJacobian(J, :Z)
dZ = zeros(n_irf)
dZ[11:end] .= getvarvals(ss)[:Z] .* 0.01 .* 0.8.^(0:(n_irf-11))
irfs = linirf(GJ, :Z=>dZ)

dKdZ = irfs[:Z][:K]
dIdZ = irfs[:Z][:I]


using Plots; 
plot(dKdZ); plot!(dIdZ)


##############################
#SIM 
##############################
n_sim = 10_000;
ε = randn(n_sim)
#ε = zeros(n_sim)
# simulate(GJ,exovar, endovar, ε, ρ,   σ=1.0; kwargs...)
plot()
simulate(GJ, :Z, :K,     ε, 0.9)        |> plot! 
simulate(GJ, :Z, :K,     ε, 0.9, 1.00)  |> plot! #verify default value σ=1
#
ρ_Z = 0.01; σ_Z = 1e-4;
plot()
simulate(GJ, :Z, :K,     ε, ρ_Z, 9*σ_Z) |> plot!
simulate(GJ, :Z, :K,     ε, ρ_Z, 2*σ_Z) |> plot!
simulate(GJ, :Z, :K,     ε, ρ_Z, 0*σ_Z) |> plot!
#
ρ_Z = 0.99; σ_Z = 1e-4;
plot()
simulate(GJ, :Z, :K,     ε, ρ_Z, 9*σ_Z) |> plot!
simulate(GJ, :Z, :K,     ε, ρ_Z, 2*σ_Z) |> plot!
simulate(GJ, :Z, :K,     ε, ρ_Z, 0*σ_Z) |> plot!

Some questions:
Q1: Is it possible to use your package to simulate starting from K_{0} = 1.5 \times K_{ss}?
So I get something like the left two quadrants of the figure posted above.

Q2: is it possible to use your package to plot the policy function for the firm’s optimal investment decision?
Q3: plot the value function?
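
While waiting on Q2/Q3, the closed-form policy and value function derived earlier in the thread can be plotted directly as a baseline (a sketch; V(k) below is the maximization-problem value, i.e., the negative of the LQR quadratic form above):

using Plots
δ, ρ = 0.1, 0.05; z = ρ + δ + 0.02
I_SS = (z - ρ - δ)/(ρ + δ); K_SS = I_SS/δ
D_SS = z*K_SS - I_SS - 0.5*I_SS^2
V_SS = ((1 + ρ)/ρ)*D_SS; Q_SS = (1 + ρ)*(1 + I_SS)
ks = range(0.0, 2*K_SS, length = 50)
plot(ks, k -> I_SS, lab = "policy u(k) (constant)")          # closed-form policy
plot!(ks, k -> (V_SS - Q_SS*K_SS) + Q_SS*k, lab = "value V(k)") # V(K_SS) = V_SS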

PS: I found other SSJ Julia implementations here

I forgot to include:
GDSGE (in C++ & Matlab) seems like an impressive package for global solutions (using policy function iteration, PFI).
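
GDSGE itself is Matlab/C++, but to give a flavor of PFI on the simple firm problem from this thread: the Euler equation is 1 + u_t = β(z + (1-δ)(1 + u_{t+1})), and since the optimal policy is constant, the state grid collapses to a scalar fixed-point iteration (a toy sketch, not GDSGE’s actual algorithm):

function euler_iteration(; δ = 0.1, ρ = 0.05)
    β = 1/(1 + ρ); z = ρ + δ + 0.02
    u = 0.0                               # initial policy guess
    for _ in 1:1_000
        u = β*(z + (1 - δ)*(1 + u)) - 1   # time iteration on the Euler equation
    end
    return u
end
euler_iteration() ≈ 0.02/0.15   # true: recovers I_SS = (z-ρ-δ)/(ρ+δ)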

Hi,

I have another package to add which solves discrete time dynamic optimisation problems with perturbation methods: MacroModelling.jl

Solving the firm investment problem in its deterministic version would look like this:

using MacroModelling

@model firm_investment_problem begin
    K[0] = (1 - δ) * K[-1] + I[0]
    Z[0] = (1 - ρ) * μ + ρ * Z[-1] 
    I[1]  = ((ρ + δ - Z[0])/(1 - δ))  + ((1 + ρ)/(1 - δ)) * I[0]
end

@parameters firm_investment_problem begin
    ρ = 0.05
    δ = 0.10
    μ = .17
    σ = .2
end

steady_state = get_steady_state(firm_investment_problem)

plot(firm_investment_problem, initial_state = collect(steady_state(:,:Steady_state)) .* [1, 1.5, 1])


The @model macro is used to define the model equations, and parameters are declared in the @parameters macro. The package then takes care of variable and parameter declaration and solves the steady state for you, symbolically if possible and numerically otherwise.

The get_steady_state function returns the steady state and its derivatives w.r.t. the model parameters, and the plot function with the initial_state for K set 50% higher delivers the simulations seen above.

There are about 13 DSGE models (some as large as 100 equations) already implemented using this package. The package has a lot of standard output implemented (autocorrelation, forecast error variance decomposition, variance, …) and can solve models up to third order using perturbation methods. Check out the documentation for more details.


There is a new package:

Presented at last year’s JuliaCon:


New package for MDPs in Julia: