I am wondering how to use ApproxFun for dynamic programming. I know the methods themselves very well, and I can easily implement a Chebyshev or spline basis and do everything from scratch, but it would be nice to use existing, mature code (this is for teaching). So the question is about how to make use of the framework of
ApproxFun to the largest extent (or whether I should do something else).
To make things concrete, consider the Bellman equation of a deterministic growth model:
V(k) = max_c [ u(c) + β V(f(k)-c) ]
where `u` and `f` are given increasing, concave functions and `β` is a given discount factor. Usually one approximates `V` in some space (eg Chebyshev polynomials, splines), then solves with one of the following:
- Value iteration: for each `k` on some grid, maximize the RHS, approximate the resulting function, and iterate. This is slow but guaranteed to converge (if the functions are "nice"), since `β < 1` makes the Bellman operator a contraction. In ApproxFun, I can do this by using constructors with specified gridpoints (what the documentation calls using ApproxFun for "manual" interpolation).
- Policy iteration: solve for `c` as a function of `k` on a grid; then, given `c`, everything else is linear, so one can use `\` as described in the documentation.
- Projection: instead of the Bellman equation above in `V`, use the Euler equation
  u'(c(k)) = β f'(k') u'(c(k')), where k' = f(k) - c(k)
  and `c(k)` is the unknown function, again approximated in some function space (eg Chebyshev, splines, linear interpolation). Decide on an objective (Galerkin, least squares, collocation) and minimize the discrepancy between the RHS and the LHS.
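To fix ideas, the value-iteration loop I have in mind looks something like the following sketch (Python/numpy as a stand-in for ApproxFun; the primitives `u(c) = log(c)`, `f(k) = k^α + (1-δ)k`, all parameter values, and the crude grid search for the inner maximization are illustrative assumptions, not part of the question):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Illustrative primitives (assumptions):
# u(c) = log(c), f(k) = k**alpha + (1 - delta) * k, discount beta < 1.
alpha, beta, delta = 0.3, 0.95, 0.1
u = np.log
f = lambda k: k**alpha + (1 - delta) * k

kmin, kmax, deg = 0.5, 10.0, 10
x_of = lambda k: 2 * (k - kmin) / (kmax - kmin) - 1  # rescale to [-1, 1]

# Chebyshev nodes mapped to [kmin, kmax]
nodes = np.cos((2 * np.arange(deg + 1) + 1) * np.pi / (2 * (deg + 1)))
kgrid = kmin + (nodes + 1) * (kmax - kmin) / 2

def V(k, coef):
    # evaluate the Chebyshev series approximating V at capital k
    return C.chebval(x_of(k), coef)

coef = np.zeros(deg + 1)  # start from V = 0
for it in range(500):
    Tv = np.empty_like(kgrid)
    for i, k in enumerate(kgrid):
        # crude grid search for max_c u(c) + beta * V(f(k) - c),
        # keeping next-period capital f(k) - c inside [kmin, kmax]
        cs = np.linspace(max(1e-6, f(k) - kmax), f(k) - kmin, 200)
        Tv[i] = np.max(u(cs) + beta * V(f(k) - cs, coef))
    new_coef = C.chebfit(x_of(kgrid), Tv, deg)  # interpolate updated values
    if np.max(np.abs(new_coef - coef)) < 1e-6:
        break
    coef = new_coef
```

The `chebfit` call through the node values is exactly the "constructor with specified gridpoints" step; everything else is model-specific.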
The problem I am running into: it is my impression that ApproxFun always tries to give me a very good approximation, under the assumption that the function being approximated is relatively cheap to evaluate. For dynamic programming the function is relatively expensive, and since all of these procedures are iterative, it is wasteful to aim for an extremely good approximation in each iteration step anyway (until we are nearing convergence). So mostly I am using ApproxFun as a collection of basis functions.
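Concretely, the "collection of basis functions" usage I have in mind for the projection approach looks like this: fix a deliberately coarse Chebyshev basis and do time iteration on the Euler equation at the collocation nodes (again a Python/numpy sketch rather than ApproxFun; log utility, `f(k) = k^α + (1-δ)k`, and all parameters are illustrative assumptions):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Illustrative primitives (assumptions): u(c) = log(c), so u'(c) = 1/c;
# f(k) = k**alpha + (1 - delta) * k, so f'(k) = alpha * k**(alpha-1) + 1 - delta.
alpha, beta, delta = 0.3, 0.95, 0.1
f = lambda k: k**alpha + (1 - delta) * k
fp = lambda k: alpha * k**(alpha - 1) + 1 - delta

kmin, kmax, deg = 0.5, 10.0, 6  # deliberately coarse, fixed basis
x_of = lambda k: 2 * (k - kmin) / (kmax - kmin) - 1
nodes = np.cos((2 * np.arange(deg + 1) + 1) * np.pi / (2 * (deg + 1)))
kgrid = kmin + (nodes + 1) * (kmax - kmin) / 2

def c_of(k, coef):
    # evaluate the Chebyshev series approximating the policy c(k)
    return C.chebval(x_of(k), coef)

coef = C.chebfit(x_of(kgrid), 0.3 * f(kgrid), deg)  # guess: consume 30% of output
for it in range(500):
    new_c = np.empty_like(kgrid)
    for i, k in enumerate(kgrid):
        # at each collocation node, solve the Euler equation
        #   1/c = beta * f'(k') / c_old(k'),   k' = f(k) - c,
        # for c by bisection; the residual is decreasing in c
        lo, hi = max(1e-6, f(k) - kmax), f(k) - kmin
        for _ in range(60):
            c = 0.5 * (lo + hi)
            kp = f(k) - c
            if 1.0 / c - beta * fp(kp) / c_of(kp, coef) > 0:
                lo = c
            else:
                hi = c
        new_c[i] = 0.5 * (lo + hi)
    new_coef = C.chebfit(x_of(kgrid), new_c, deg)  # refit the same coarse basis
    if np.max(np.abs(new_coef - coef)) < 1e-6:
        break
    coef = new_coef
```

The basis stays fixed and coarse across iterations; only near convergence would one want to refine it, which is the part that does not map naturally onto ApproxFun's adaptive constructors.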