Help speeding up finite horizon dynamic programming problem

You definitely don’t need your own optimizer for this. But if you look at the value function that this is spitting back, there are a bunch of -Inf terms at the low end of the asset distribution.

There’s no way a model like this needs to run for hours. Unless you have a lot of state variables, it should be able to run in seconds.

@nilshg @jacobadenbaum grid_A has 901 points. In the full model I also incorporate random income shocks, which I was integrating with 5 points of their distribution (that is where the n comes into play).

@nilshg I’ll check that out!

@jacobadenbaum you mean with T=65 it should run in seconds, or with T=5? I just ran it with n=1 and T=65, and it took 12 minutes. BTW, I checked my previous run and found that the maximum amount of borrowing is less than 500, so I can trim the lower end of the grid a bit.

Yes. I’d expect it even with T=65. Finite horizon problems are really really fast.
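To see why: with a finite horizon there is no fixed point to iterate on; the whole solution is one backward pass over the state grid. Here is a minimal sketch of that structure (not your code: I am assuming log utility and deterministic income, and replacing the continuous choice with a grid search over next-period assets):

# Minimal backward-induction skeleton (hypothetical names: grid_A, w, β, r)
function solve_backward(grid_A, w; β = 0.95, r = 0.05, T = 65)
    nA = length(grid_A)
    V = zeros(nA, T + 1)              # V[:, T+1] = 0 is the terminal condition
    policy = zeros(Int, nA, T)
    for t in T:-1:1                   # one pass, no value function iteration
        for (s, a) in enumerate(grid_A)
            best, best_idx = -Inf, 1
            for (s2, a2) in enumerate(grid_A)   # next-period asset choice
                c = w[t] + a * (1 + r) - a2
                c <= 0 && continue              # infeasible: skip, don't optimize
                val = log(c) + β * V[s2, t + 1]
                if val > best
                    best, best_idx = val, s2
                end
            end
            V[s, t] = best
            policy[s, t] = best_idx
        end
    end
    return V, policy
end

With 901 grid points and T=65 that is roughly 65 * 901^2 ≈ 53 million candidate evaluations, which Julia gets through in seconds once the code is inside a function.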

Take a look at the solution you get (just inspect the matrix). In your simple case, a worker at t=1 who starts at the bottom of the asset grid can consume at most w[1] + grid_A[1] * (1+r) if they choose a labor supply of 1. If you type this into the REPL, you’ll see that with the parameters you’ve chosen, this is negative. So the optimization routine works really hard, searching all over the place, just to conclude that there is no way to avoid returning -Inf.
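For concreteness, with made-up numbers in the spirit of your grid:

julia> w1, a_min, r = 1.0, -500.0, 0.05;   # hypothetical values

julia> w1 + a_min * (1 + r)                # best-case consumption at the grid bottom
-524.0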

One way to fix this is to check this condition manually in the code. Something like

for s in 1:length(grid_A)
    for i in 1:n
        if w[t] + grid_A[s] * (1 + r) < 0
            # No feasible consumption at this state: record -Inf and skip the optimizer
            Vdict[i, s, t] = -Inf
            continue
        end
        initial_x = [A1dict[i, s, t+1], 0.0]
        # ... run the optimization as before ...
    end
end

Yep, now down to 8 minutes!

Your post is primarily about performance, so my simple advice is: Jupyter and scripts are great for prototyping, but before you ever measure the speed of anything, move everything into functions. It is easy: write your code at the top level, and when you are happy with it, wrap it in a function; the globals suddenly become locals.
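Schematically (this is just the pattern, not your model):

# Slow: at the top level, β and V are untyped globals
β = 0.95
V = zeros(1_000)
for i in eachindex(V)
    V[i] = β^i
end

# Fast: identical code, but wrapped so β and V become locals
function fill_values(n; β = 0.95)
    V = zeros(n)
    for i in eachindex(V)
        V[i] = β^i
    end
    return V
end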

After that, the most important thing is to ensure you aren’t accidentally using any abstract types. Named tuples are a way to make those accidents impossible, with no performance overhead.
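For example, a named tuple infers concrete types from the values you pass, whereas a hand-written struct makes it easy to slip in an abstract annotation:

params = (β = 0.95, r = 0.02, T = 65)
typeof(params)  # NamedTuple{(:β, :r, :T), Tuple{Float64, Float64, Int64}}

# the easy accident with a struct: Real is abstract, so every access is slow
struct SlowParams
    β::Real
    r::Real
end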

You are right. I thought it said Integer instead of Int. I often forget that Int is concrete (an alias for Int64 on 64-bit systems).

But my general point remains: it would have been very easy to accidentally type Real or Integer there. I find this kind of mistake all over intermediate programmers’ code. The code for parameterized structs also gets significantly trickier if you want to allow a different number type for each field.

To be clear, this is only indirectly about performance. Correctly parameterized structs have the same performance as named tuples. It interacts with performance only in that it makes performance-killing mistakes less likely (i.e. it is easy to parameterize structs incorrectly).
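For instance (my illustration):

struct BadDP          # incorrectly parameterized: Real is abstract
    β::Real
end

struct GoodDP{T}      # correctly parameterized: concrete type fixed at construction
    β::T
end

GoodDP(0.95)          # GoodDP{Float64}, as fast as a named tuple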

This is partially a question of how advanced a programmer you are and what you are doing with it: (a) if you are writing a package, then you want to start and end with a struct under all circumstances; (b) if you need to dispatch on different types of parameters with the same function name, then structs allow you to use Julia’s multiple dispatch. But while everything in Julia is built on multiple dispatch, most user code doesn’t exploit it directly.

If you are just writing “scripts” rather than packages or big projects, and don’t consider yourself an advanced programmer, then named tuples tend to be much easier to read and much less prone to accidentally picking the wrong types for your structures. I have been using Julia a while and I still strongly prefer named tuples for non-package code.

In packages: often. In user code and “scripts”, generally not, as far as I have seen. If you forget to include a field in a named tuple, you get a clear error as soon as you unpack it. The @with_kw factory with defaults (see the example below) makes this even less likely.

There is also the issue that structs are often constructed with positional arguments, which hides a lot of bugs. If you write params = DP(0.1, 0.2) you can easily mix up the order, whereas with named tuples that cannot happen. There are macros for named-argument constructors of structs, but that is getting advanced.
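To illustrate the ordering trap with a hypothetical struct:

struct DP
    β::Float64
    r::Float64
end

DP(0.02, 0.95)                 # oops: β and r silently swapped, no error raised

params = (β = 0.95, r = 0.02)  # with a named tuple the mix-up cannot happen

# one of those macros is Base.@kwdef, which generates a keyword constructor
Base.@kwdef struct DP2
    β::Float64 = 0.95
    r::Float64 = 0.02
end

DP2(r = 0.02, β = 0.95)        # order no longer matters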

I find there is a tradeoff in terms of experience. If you are a reasonably advanced programmer, you get very good at mentally parsing and filtering out the extra notation of all the type declarations. I find that the easiest way for beginners to avoid bugs is to have short, clean code without any syntactic noise. Less code = easier to spot deviations from the math. Named tuples, duck typing, and the complete absence of type declarations are a good way to achieve that (and it carries no performance penalty).

But reasonable people can disagree on the right approach to get started and make code easy to read/debug.

That works, but I also like to use the Parameters package for this. It makes things idiot-proof and compact. It provides the @unpack and @with_kw macros. I should point out that these are great for working with structs as well as named tuples.

using Parameters

function f(params)
    @unpack fun, x = params  # easy to add fields & no worries about ordering
    return fun(x)
end

make_params = @with_kw (fun = sin, x = pi)  # optional, but adds default values

f(make_params())            # with defaults
f(make_params(fun = cos))   # change only one of the defaults
f((fun = sin, x = pi))      # named tuples can also be created directly

There is one more general piece of advice I find useful for making code readable and avoiding errors and bugs, and that is to use similar whenever possible when allocating arrays. That is,

Vdict = Array{Float64}(undef, (n, length(grid_A), T))
Cdict = similar(Vdict) # same size and type, though you can override either

IMO the best approach is a struct written with unconstrained parametric types, especially while still prototyping. I never “accidentally” write Real, as I rarely write out actual types for fields.
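Something like this (my sketch, with illustrative values from this thread):

struct Params{T1, T2, T3}
    β::T1
    r::T2
    grid_A::T3
end

# concrete types are inferred at construction, so nothing ends up abstract
Params(0.95, 0.02, range(-500.0, 4000.0, length = 901))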

I’m just posting this to report on my experience. After profiling the code, I found that most of the time was spent inside the routines in Optim. I tried a “bare bones” Nelder Mead algorithm and, eureka, the model now runs in 5 seconds. With @code_warntype I could see some type instabilities in Optim’s optimize, and I think those were the culprit of the slowdown. Thank you all for your help!
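For anyone who wants to reproduce that kind of diagnosis, the pattern is roughly this (the objective here is a placeholder, not the model):

using Optim

f(x) = (x[1] - 1.0)^2 + 4.0 * (x[2] + 0.5)^2

# red Any/Union entries in the output flag type instabilities
@code_warntype optimize(f, [0.0, 0.0], NelderMead())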

Thanks for reporting your experience. This is very informative and useful.
Would you mind posting the latest version of your Julia code (the version that runs in 5 seconds)?
I am finding that the April 6 version posted is slower than the April 5 one (!)
I am guessing I am not implementing all the changes that were suggested from April 6 onwards in the correct way.
I am a newbie in Julia and this will be very helpful to get familiar with it.
Thanks!

It will be in my GitHub shortly.

Hi @amrods, I’m currently trying to implement a lifecycle model in Julia and am trying to speed up the solution. You mentioned that using a bare bones Nelder Mead algorithm improved performance, but on your GitHub it looks like you use Optim.jl’s Golden Section algorithm and Conjugate Gradient. Do I take from this that Optim.jl’s algorithms performed better than your Nelder Mead implementation? [ps. thanks for sharing your VFI code on GitHub!]

I think at some point I deleted that bare bones Nelder Mead implementation. In the versions I have on GitHub I used Optim’s derivative-based methods (Optim computes the derivatives itself when you don’t supply them explicitly). The bare bones implementation just uses the Nelder Mead algorithm from Algorithms for Optimization by Kochenderfer and Wheeler (also available in my GitHub and theirs).
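For reference, the call patterns look roughly like this (the objectives are placeholders):

using Optim

f1(c) = (c - 2.0)^2
optimize(f1, 0.0, 5.0, GoldenSection())  # univariate: needs only a bracket

f2(x) = (x[1] - 1.0)^2 + (x[2] + 0.5)^2
# ConjugateGradient uses gradients; Optim falls back to finite differences
# automatically when you do not supply a gradient function
optimize(f2, [0.0, 0.0], ConjugateGradient())
optimize(f2, [0.0, 0.0], NelderMead())   # the derivative-free alternative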

Thanks. So in your experience, you found that Optim’s derivative-based methods were faster than Nelder Mead?

Yes, at least for that simple problem. I found that Nelder Mead would sometimes fail to make steady progress. I suppose that could be addressed by tuning the algorithm’s parameters, and I remember playing with that a little. However, I stopped once I got the parallel version using Optim under 5 seconds.