Using ApproxFun for dynamic programming

I am wondering how to use ApproxFun for dynamic programming. I know the methods themselves very well, and I could easily implement a Chebyshev or spline basis and do everything from scratch, but it would be nice to use existing, mature code (this is for teaching). So the question is how to make the most of ApproxFun's framework (or whether I should do something else entirely).

To make things concrete, consider the Bellman equation of a deterministic growth model:

V(k) = max_c [ u(c) + β V(f(k)-c) ]

where u and f are increasing concave functions (given) and β is a given constant. Usually one approximates V in some space (eg Chebyshev, splines), then solves with one of the following:

  1. Value iteration: for each k on some grid, maximize the RHS, approximate the resulting function, and iterate. This is slow but guaranteed to converge (for "nice" functions), since β < 1 makes the Bellman operator a contraction. In ApproxFun, I can do this by using constructors with specified gridpoints (what the documentation calls Using ApproxFun for "manual" interpolation).
  2. Policy iteration: solve for c as a function of k on a grid; then, given c, everything else is linear, so one can use \ as described in the ApproxFun manual.
  3. Projection: instead of the Bellman equation above in V, use the Euler equation

u'(c(k)) = β f'(k′) u'(c(k′)),   where k′ = f(k) - c(k),

c(k) being the unknown policy function, again approximated in some function space (eg Chebyshev, splines, linear interpolation). Decide on an objective (Galerkin, least squares, collocation), and minimize the discrepancy between the RHS and the LHS; a minimal sketch follows.
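
A possible collocation implementation of option 3, sketched under stated assumptions: log utility and a concrete production function (the same form as in the code further down); points, Fun, and coefficients are ApproxFun calls, while NLsolve and the domain and initial guess are illustrative choices, not from the original post.

using ApproxFun, NLsolve

β = 0.95
u′(c) = 1 / c                        # marginal utility, log case
f(k) = k^0.3 + 0.95k                 # A k^α + (1-δ) k, with A = 1, α = 0.3, δ = 0.05
f′(k) = 0.3k^(-0.7) + 0.95

S = Chebyshev(2.0..9.0)              # capital domain around the steady state
a0 = coefficients(Fun(k -> f(k) - k, S))  # steady-state-consistent guess c(k) = f(k) - k
ks = points(S, length(a0))           # "natural" collocation points, one per coefficient

# Euler residuals at the collocation points, as a function of the coefficients;
# assumes the iterates keep c positive and f(k) - c(k) near the domain
function euler_residuals!(R, a)
    c = Fun(S, a)                    # candidate policy with coefficients a
    for (i, k) in enumerate(ks)
        k′ = f(k) - c(k)             # next period's capital
        R[i] = u′(c(k)) - β * f′(k′) * u′(c(k′))
    end
end

sol = nlsolve(euler_residuals!, a0)  # collocation: drive all residuals to zero
c_opt = Fun(S, sol.zero)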

The problem I am running into: my impression is that ApproxFun always tries to give me a very accurate approximation, under the assumption that the approximated function is relatively cheap to evaluate. For dynamic programming the function is relatively expensive, and since all the procedures are iterative, extremely accurate approximations at each iteration step are wasteful anyway (until we are nearing convergence). So mostly I am using ApproxFun as a collection of basis functions.


Do you have code that works? Having a profile of what's too slow would help in answering.

Are you asking for this?

https://github.com/JuliaApproximation/ApproxFun.jl/issues/459


A self-contained example for a single step of value iteration is below. value_iteration is the "naive" approach, while value_iteration_grid just evaluates on a grid. The former is very slow; the latter is OK (I did not optimize anything).

I think the question boils down to this: if I just want to use ApproxFun as a way to manage approximation using basis functions, then it seems that I am fighting against the natural (and very nice!) approach it was designed for (construction of basis matrices is tedious, etc). So maybe I should just write my own functions for parallel functionality.

using ApproxFun
using Parameters
using Optim

# we are solving
# V(k) = max_c u(c) + β V(f(k)-c) where f(k) = A k^α + (1-δ)*k
# with CRRA u(c)

# Steady state via the first-order and envelope conditions:
# FOC:      u'(c) = β V'(f(k)-c)
# envelope: V'(k) = β V'(f(k)-c) f'(k)
# imposing the steady state f(k)-c = k:
#   u'(c) = β u'(c) f'(k)  =>  β f'(k) = 1
# with f'(k) = A α k^(α-1) + (1-δ):
#   A α k^(α-1) = 1/β - (1-δ)  <=>  k = (A α / (1/β - 1 + δ))^(1/(1-α))

struct GrowthModel
    α::Float64                        # capital share
    δ::Float64                        # depreciation
    A::Float64                        # productivity
    σ::Float64                        # CRRA utility coefficient
    β::Float64                        # discount factor
end

CRRA_utility(c, σ) = σ == 1 ? log(c) : (c^(1-σ))/(1-σ)

function steady_state(model::GrowthModel)
    @unpack A, α, δ, β = model
    ((A*α)/(1/β - 1 + δ))^(1/(1-α))
end

function optimal_c_rhs(model::GrowthModel, k, V)
    @unpack A, α, δ, β, σ = model
    y = A*k^α + (1-δ)*k
    # maximize u(c) + β V(y-c) by minimizing its negative, with bounds that
    # keep y-c inside the domain of V
    opt = optimize(c -> -(CRRA_utility(c, σ) + β * V(y-c)),
                   max(0.0, y-domain(V).b), y-max(0, domain(V).a))
    Optim.converged(opt) || error("did not converge")
    (Optim.minimizer(opt), -Optim.minimum(opt))
end

function value_iteration(model::GrowthModel, V)
    Fun(k->optimal_c_rhs(model, k, V)[2], domain(V))
end

function basis_matrix(S, p, M=length(p))
    # matrix of the first M basis functions of S, evaluated at the points p
    N = length(p)
    V = Array{Float64}(undef, N, M)
    for k = 1:M
        V[:,k] = Fun(S, [zeros(k-1); 1]).(p)
    end
    V
end

function value_iteration_grid(model::GrowthModel, V, N)
    d = domain(V)
    k_grid = range(d.a, d.b; length = N)
    V_grid = map(k -> optimal_c_rhs(model, k, V)[2], k_grid)
    S = Chebyshev(d)
    Fun(S, basis_matrix(S, k_grid) \ V_grid)
end


model = GrowthModel(0.3, 0.05, 1, 1, 0.95)

# initialize
ss = steady_state(model)
V = Fun(identity, (0.5*ss)..(2*ss))

# single step -- this is very slow (127s, 5.27G allocated)
@time value_iteration(model, V)

# just evaluating in a grid, OK
V = value_iteration_grid(model, V, 10)
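
A possible driver loop around this (a hypothetical helper, not from the post): iterate the grid version until successive approximations agree.

# iterate value_iteration_grid to rough convergence; tol and iters are
# illustrative defaults, and convergence is measured on the coefficients
function solve_bellman(model, V; N = 10, iters = 500, tol = 1e-6)
    for _ in 1:iters
        V_new = value_iteration_grid(model, V, N)
        maximum(abs, coefficients(V_new - V)) < tol && return V_new
        V = V_new
    end
    error("no convergence in $iters iterations")
end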

Possibly not. With

function value_iteration_nonadapt(model::GrowthModel, V, N)
    Fun(k->optimal_c_rhs(model, k, V)[2], domain(V), N)
end

I get oscillating behavior, not convergence.

The constructor is very unoptimized at the moment. There are a few changes that may help:

  1. Reuse the values. At the moment it doubles the grid until convergence is verified, but it re-evaluates the function at the same points over and over. This would be a fairly simple fix.
  2. Convergence tolerance. There's a hard-coded tol in the constructor that could easily be changed to a keyword. This will not make as much of a difference as you might think, since analytic functions converge exponentially fast.
  3. Better convergence verification. Chebfun has a fairly complicated heuristic that could be copied; ApproxFun just naively checks whether the tail is below the tolerance.
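
A user-side workaround for point 1 in the meantime: cache evaluations before handing the function to the constructor, so repeated points cost nothing. A minimal sketch (memoize is a hypothetical helper, not ApproxFun API):

function memoize(f)
    cache = Dict{Float64,Float64}()      # evaluation point => value
    x -> get!(() -> f(x), cache, x)
end

# eg Fun(memoize(k -> optimal_c_rhs(model, k, V)[2]), domain(V))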

Thanks for your help. I realized that my approach is by necessity more manual, and harder to automate with a high-level library like ApproxFun. What would really help me is the following interface:

  1. a representation for a basis with a given number of functions (eg the first n Chebyshev polynomials, tent functions for FEM with given nodes),
  2. when applicable, its "natural" collocation points (eg Chebyshev zeros, nodes of tent functions),
  3. a way to construct an approximated function-like object g from a set of (x, f(x)) pairs, especially when the x are the points in 2. above (which may not require matrix inversion, eg for Chebyshev),
  4. quick evaluation of g(x), and possibly g'(x) and g''(x) (for optimization), though going through AD is fine too.

I could do all this with a thin wrapper around ApproxFun constructs; I am trying to decide whether to do that or just write it from scratch.
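
For concreteness, here is a rough sketch of what I mean by a thin wrapper (all names are hypothetical; points, Fun, and evaluation are the ApproxFun pieces being wrapped):

using ApproxFun

struct TruncatedBasis{S}
    space::S                   # an ApproxFun space, eg Chebyshev(0..1)
    n::Int                     # number of basis functions (item 1)
end

nodes(b::TruncatedBasis) = points(b.space, b.n)              # item 2

function fit(b::TruncatedBasis, x, fx)                       # item 3
    B = [Fun(b.space, [zeros(j-1); 1])(xi) for xi in x, j in 1:b.n]
    Fun(b.space, B \ fx)
end

# item 4: the result is a Fun, so g(x) evaluates and g' differentiates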

I have been thinking about this topic for a while (teaching a short course on numerical methods to economists in the meantime), and while I wrote my own code for the course for pedagogical purposes, I still think it would be nice to use ApproxFun in general. The usual requirements of numerical problems in economics boil down to the following:

  1. Given a function basis B and a set of coefficients a, calculate the approximated function F(B, a) at arbitrary points (in a grid), fast. These are not always the same points, as the calculation involves optimization steps which depend on a. The calculation can be used to compute some residual function g(F), which in turn feeds collocation, Galerkin, etc. B can be a tensor product of univariate bases, or something more sophisticated that reduces the size of a for problems with many dimensions, eg complete polynomials or Smolyak.
  2. Given function values on some grid, fit a. This is used in value iteration; projection methods are faster, but one needs a good initial guess.
  3. Transform domains and function values. Domain transformations, such as (-1,1) to (0,Inf), are common. Range transformations are sometimes needed, eg in a savings problem where the fraction of assets saved is in (0,1); see the sketch after this list.
  4. Have basis functions which are crude but preserve monotonicity and convexity, eg linear interpolation (linear splines). These converge slowly, but are essential when the problem is non-smooth.
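
To illustrate the range transformation in item 3: one can approximate a latent function and push it through a logistic, so the savings fraction stays in (0,1) by construction. A sketch (the latent h is a placeholder, not a fitted object):

using ApproxFun

logistic(x) = 1 / (1 + exp(-x))
h = Fun(k -> 0.5k - 1, 0.0..4.0)   # latent approximation on the capital domain
s(k) = logistic(h(k))              # savings fraction, guaranteed to lie in (0,1)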

My understanding is that

  1. ApproxFun provides a nice DSL for bases and for representing functions. But the interface is very clever, tries to hide a lot from the user, and uses adaptive methods, so one would need to build a different API, along the lines of "just calculate, don't re-approximate unless I ask for it". The core is basically there; only a thin layer of syntax is needed.

  2. Fitting can be done, and is described in the "manual" example of the docs. The API could be made easier though.

  3. Transformations are trivial to add.

  4. Crude linear bases are missing; the library focuses on spectral methods.

So I am unsure whether to contribute to ApproxFun or start something different. A lot of great code is already implemented in it, and duplicating it seems silly. But OTOH I am not sure PRs with a parallel API and weird bases would be appreciated.

I'm having trouble understanding the first paragraph, but it sounds like you just need function evaluation, which definitely works in ApproxFun. The rest is there too, I think, for the most part:

  1. the domain Ray() represents functions on 0…Inf using a map to -1…1
  2. SplineSpace represents general splines (not sure how much is implemented)
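
For example (a sketch, assuming the adaptive constructor handles a decaying function on the half line):

using ApproxFun

g = Fun(x -> exp(-x), Ray())   # Ray() is the domain 0..∞
g(2.5)                         # should be ≈ exp(-2.5)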

I'd be happy to take PRs, or to help you with the code. Though I think some of what you mentioned could live in a separate package (for example, I'm not sure collocation should be built into ApproxFun itself).

Most of what I need is already in ApproxFun; what I am missing is a way to manage the bookkeeping. Knowing how to do the following (or that there is no support for it and I have to write a thin layer) would be useful:

  1. How to denote a family of basis functions with a given number of elements in ApproxFun? Eg Interval(0,1) would give me a Chebyshev basis appropriately transformed, but is there a way to say "this basis uses the first 10 Chebyshev polynomials"?

  2. Return the "natural" collocation points for a basis, eg Chebyshev nodes for Chebyshev polynomials, nodes + extra points for splines, etc.

  3. How to provide coefficients for product spaces using the Fun constructor?

As an alternative to ApproxFun, I have a package SparseGrids which I use for solving mid- to high-dimensional economic models. It uses linear or quadratic basis functions, though the latter can be unstable if there are excessive kinks in the approximated functions.

There's actually a space SubSpace(::Space, rng) that can represent finite-dimensional spaces. | is overridden as a shorthand:

julia> Chebyshev()|(1:3)
Chebyshev(怐-1.0,1.0怑)|1:3

julia> import ApproxFun: ∞

julia> Chebyshev()|(1:2:∞) # odd functions
Chebyshev(怐-1.0,1.0怑)|1:2:∞

Not much functionality is overloaded yet, so there's definite room for a PR.

points(sp,n) is the right routine:

julia> points(Chebyshev(),10)
10-element Array{Float64,1}:
  0.987688
  0.891007
  0.707107
  0.45399 
  0.156434
 -0.156434
 -0.45399 
 -0.707107
 -0.891007
 -0.987688

Do you mean as a matrix of coefficients? There is support; look at coefficientmatrix, fromtensor, and totensor.
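
A hedged sketch of that route (the exact fromtensor signature is assumed from this reply, not checked against the docs):

using ApproxFun

S = Chebyshev()^2                       # tensor-product space on the square
M = [1.0 0.5; 0.25 0.0]                 # coefficients c_ij on T_{i-1}(x) T_{j-1}(y)
f = Fun(S, ApproxFun.fromtensor(S, M))  # flatten the matrix into Fun coefficients
f(0.1, 0.2)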
