I am wondering how to use ApproxFun for dynamic programming. I know the methods themselves very well, and I can easily implement a Chebyshev or spline basis and do everything from scratch, but it would be nice to use existing, mature code (this is for teaching). So the question is how to make the most of the ApproxFun framework (or whether I should do something else).
To make things concrete, consider the Bellman equation of a deterministic growth model:
V(k) = max_c [ u(c) + β V(f(k) - c) ]
where u and f are given increasing, concave functions and β is a given constant. Usually one approximates V in some space (eg Chebyshev, splines), then solves with one of:
- Value iteration: for each k on some grid, maximize the RHS, approximate the result, and iterate. This is slow but guaranteed to converge (if the functions are "nice"), since β < 1 makes it a contraction. In ApproxFun, I can do this by using constructors with specified gridpoints (what the documentation calls "Using ApproxFun for manual interpolation").
- Policy iteration: solve for
c as a function of k on a grid; then, given c, everything else is linear, so one can use \ as described in the ApproxFun manual.
- Projection: instead of the Bellman equation above in
V, use the Euler equation
u'(c(k)) = β f'(k') u'(c(k')), where k' = f(k) - c(k)
and c(k) is the unknown policy function, again approximated in some function space (eg Chebyshev, splines, linear interpolation). Decide on an objective (Galerkin, least squares, collocation), and minimize the discrepancy between the RHS and the LHS.
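For concreteness, here is roughly what the value-iteration variant looks like. The sketch is in Python with numpy's Chebyshev routines rather than ApproxFun, purely to keep it self-contained; the model (u = log, f(k) = k^α, full depreciation) and all grid sizes and tolerances are illustrative choices, picked because this case has a known closed form to check against:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Toy model (illustrative): u(c) = log(c), f(k) = k^alpha.
alpha, beta = 0.3, 0.9
f = lambda k: k**alpha

# Approximate V by a degree-n Chebyshev polynomial on [kmin, kmax].
kmin, kmax, n = 0.05, 0.5, 15
xg = C.chebpts1(n + 1)                       # Chebyshev nodes in [-1, 1]
kgrid = kmin + (kmax - kmin) * (xg + 1) / 2  # mapped to the capital grid

def V(k, coef):
    x = 2 * (k - kmin) / (kmax - kmin) - 1   # map back to [-1, 1]
    return C.chebval(x, coef)

coef = np.zeros(n + 1)                       # start from V = 0
for it in range(1000):
    Tv = np.empty_like(kgrid)
    for i, k in enumerate(kgrid):
        # Feasible c keeps next period's capital inside [kmin, kmax],
        # so V is never extrapolated.
        cc = np.linspace(max(1e-8, f(k) - kmax), f(k) - kmin, 400)
        Tv[i] = np.max(np.log(cc) + beta * V(f(k) - cc, coef))
    new_coef = C.chebfit(xg, Tv, n)          # re-fit through updated values
    if np.max(np.abs(new_coef - coef)) < 1e-9:
        break
    coef = new_coef
```

The maximization over a fixed fine grid for c is crude (a scalar optimizer per node would be better), but it makes the point: each iteration only ever evaluates V at the gridpoints, so a fixed-degree fit through those points is all one needs.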
The problem I am running into: my impression is that ApproxFun always tries to give me a very accurate approximation, under the assumption that the function being approximated is relatively cheap to evaluate. In dynamic programming the function is relatively expensive, and since all the procedures above are iterative, an extremely accurate approximation at each iteration step is wasteful anyway (until we are nearing convergence). So mostly I end up using ApproxFun as a collection of basis functions.
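To illustrate that last point, this is what "basis library plus external solver" looks like for the projection approach: the library supplies only the collocation nodes and basis evaluation, and the nonlinear system is handed to a generic root-finder. Again a Python/numpy/scipy sketch of the same toy model (u = log, f(k) = k^α; parameter values illustrative), imposing the standard Euler condition u'(c(k)) = β f'(k') u'(c(k')) at the nodes:

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.optimize import root

# Toy model with a known solution: u(c) = log(c), f(k) = k^alpha.
alpha, beta = 0.3, 0.9
kmin, kmax, n = 0.05, 0.5, 10

xg = C.chebpts1(n + 1)                    # collocation nodes in [-1, 1]
kg = kmin + (kmax - kmin) * (xg + 1) / 2  # capital at the nodes

def policy(k, coef):
    x = 2 * (k - kmin) / (kmax - kmin) - 1
    return C.chebval(x, coef)

def residual(coef):
    # Euler residual u'(c(k)) - beta f'(k') u'(c(k')), k' = f(k) - c(k),
    # imposed exactly at the collocation nodes.
    c = policy(kg, coef)
    knext = kg**alpha - c
    cnext = policy(knext, coef)
    return 1.0 / c - beta * alpha * knext**(alpha - 1) / cnext

# Initial guess: consume 60% of output (fitted exactly through the nodes).
coef0 = C.chebfit(xg, 0.6 * kg**alpha, n)
sol = root(residual, coef0)
```

With log utility and f(k) = k^α the closed-form policy is c(k) = (1 - αβ) k^α, which the collocation solution should reproduce closely; for a less forgiving model one would need a better initial guess (or a homotopy), which is exactly the kind of iterative refinement where a fixed, cheap basis beats an adaptively accurate one.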