I’m writing a dynamic optimization algorithm for an economics application with search and matching and unemployment. The algorithm is structured as two functions. There’s an outer function that iterates until finding a fixed point in the optimized value functions. It feeds a current guess for the value functions to an inner function that computes optimal policies for control variables and updates the value functions.
The inner function will have several returns (all multidimensional arrays) corresponding to optimal policy functions, value functions, and derivatives of the value functions with respect to endogenous state variables.
Currently there are 14 separate returns.
Is there a way to organize these returns so that when calling the function, one doesn’t need to keep track of the ordering of the returns?
My first thought was to use a named tuple to contain all these returns, but then I thought the immutability of named tuples might cause problems because I need to call the inner function a large number of times and have its returns mutate a variable in the outer fixed point function.
Performance is a key concern because this algorithm is part of a very large and computationally intensive model.
About the named tuples: Do you need to mutate the contents of the arrays, or will you need entirely new arrays? Because the former is still possible, even if the array is in a named tuple. See e.g.
mytuple = (arr=[1,2,3], scalar=1.0)
mytuple.arr[1] = 2 # works
mytuple.arr = [3,4,5] # does not work
mytuple.arr .= [3,4,5] # also works
mytuple.scalar = 2. # does not work
resize!(mytuple.arr, 2) # even this works!
Thanks for the speedy reply! I would need to mutate all of the elements in the arrays. Your code snippet suggests this is possible using something like mytuple.arr .= newarr. How would I work this into a function call, though? Would I pass in the tuple with the arrays to be mutated? Something like:
function !innerfunc(args, mytuple)
    newarr = dostuff(args)
    mytuple.arr .= newarr
end
Is there a way to do it without requiring the tuple to be one of the function arguments?
Also, any thoughts on pros/cons of this approach compared to the mutable struct idea in your previous post?
One challenge I think would be that I am using a trick to allow me to change the precision of all floats in the model: I declare a constant prec with the type of precision I want (e.g., const prec = 1f0) and then use fill, oftype(), typeof(), etc., to ensure that all other floats in the model have the same precision as prec. I imagine it would be tricky to implement that trick for the mutable struct.
I think you should use an immutable struct for your case. You have enough fields that it is worth making an actual type; named tuples are better suited to simpler code. The struct can be immutable for the same reason the named tuple fields could be: you modify the contents of the existing arrays rather than assigning new ones. Make a constructor that creates empty arrays of the right sizes.
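On the precision trick you mentioned: if the struct is parametric in the element type, the trick carries over with no extra work, since the type is inferred from whatever you construct it with. A contrived sketch (the names Results, V, and policy are invented, not from your actual model):

```julia
# Sketch: a parametric struct inherits the precision of `prec` automatically.
# `Results`, `V`, and `policy` are made-up names, not from the actual model.
const prec = 1.0f0   # Float32, per the convention described above

struct Results{T<:AbstractFloat}
    V::Matrix{T}       # e.g. a value function
    policy::Matrix{T}  # e.g. a policy function
end

# Constructor that infers the element type from a sample value such as `prec`:
Results(sample::T, n, m) where {T<:AbstractFloat} =
    Results(fill(zero(T), n, m), fill(zero(T), n, m))

r = Results(prec, 3, 4)
eltype(r.V) === typeof(prec)   # true: fields match the precision of prec
```

Changing const prec = 1f0 to const prec = 1.0 then switches every field to Float64 without touching the struct definition.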
A few idiomatic points.
What you are talking about here is an in-place solve on a preallocated structure, so maybe name your type accordingly.
By convention, you add the ! to the name of the in-place function at the end, not the beginning.
By convention, you make the mutated argument the first one, so I would call it
results = ResultType(???) # do you need sizes?
# then in the loop, etc....
innerfunc!(results, args)
But… make 100% sure this is all worth it. Preallocating and doing things in place often doesn’t help until things get large, and can hurt performance otherwise.
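One quick way to check whether it is worth it: compare the allocations of the two styles with @allocated after a warm-up call. A generic sketch, where dostuff is a stand-in for your real computation:

```julia
# Sketch: measuring whether the in-place version actually saves allocations.
# `dostuff`, `allocating`, and `inplace!` are stand-ins, not your real code.
dostuff(x) = x .^ 2 .+ 1.0

allocating(x) = dostuff(x)                 # returns a fresh array each call
inplace!(out, x) = (out .= x .^ 2 .+ 1.0)  # writes into a preallocated array

x = rand(1_000)
out = similar(x)

allocating(x); inplace!(out, x)   # warm up (compile) before measuring
a1 = @allocated allocating(x)
a2 = @allocated inplace!(out, x)
a2 < a1   # the in-place version should allocate (close to) nothing
```

If the difference is negligible at your problem sizes, the simpler allocating version may be the better design.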
You could also fold the two lines into one:
function innerfunc!(args, mytuple)
    mytuple.arr .= dostuff(args)
end
which might save you some allocations (not sure if the compiler would just turn both versions into the same thing anyway).
Re pros / cons: I concur with @jlperla that in this case you have enough fields to make it an actual type. Also, if you don’t need to change the fields of your struct, but only the contents of the arrays, an immutable struct is probably better.
Re passing in the tuple: If you want to preallocate your arrays, there is no way around passing them into your function.
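Strictly speaking, you can hide the argument behind a closure that captures the preallocated container, though the container is still effectively being passed, just implicitly, so this is a style choice rather than a real difference. A contrived sketch (all names invented):

```julia
# Sketch: capturing a preallocated array in a closure so call sites
# don't mention it. `dostuff` is a stand-in for the real computation.
dostuff(args) = 2 .* args

buffer = zeros(3)   # preallocated once, captured by the closure below
innerfunc! = args -> (buffer .= dostuff(args); nothing)

innerfunc!([1.0, 2.0, 3.0])   # no container appears in the call
buffer == [2.0, 4.0, 6.0]     # true: buffer was mutated in place
```

Note that the explicit-argument version is usually clearer, which is why the convention favors it.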
Okay, thanks for the advice, everyone. I’ll go with an immutable struct.
For one of the steps in the algorithm, I’ll need to compare old and new values of some of the arrays to check convergence. Initially I was thinking this would be done in the outer function, but implementing that seems to require adding additional fields to the struct (e.g., arraynew and arrayold), which would increase memory allocations.
Instead I could check convergence in the inner function before mutating the arrays and have the inner function return a boolean indicating convergence.
I think that design starts coupling the inner and outer parts of the loop a little too much.
You could just allocate two of these at the beginning, then have a prev_results in the outer loop of the same type, and just assign into it before calling the inner loop, etc
My initial concern was with economizing on memory allocations and storage space more generally.
When I run the model at full scale and with 64-bit floats, the arrays will be on the order of 650 MB each.
But I realized when writing code this morning that even doing the comparison within the inner function requires holding “old” results in a temporary array so there’s really no way to avoid those allocations and, as you suggest, it’s a cleaner design to have the comparison in the outer function.
Those sound large enough for it to matter. Here is a contrived example of some of the in-place tricks:
using Parameters, LinearAlgebra

struct MyResults{T}
    x::Array{T,1}
    A::Array{T,2}
end
MyResults(N) = MyResults(Array{Float64,1}(undef, N), Array{Float64,2}(undef, N, N))

# Support for in-place copying.
import Base.copy!
function copy!(dst::MyResults, src::MyResults)
    dst.x .= src.x
    dst.A .= src.A
end

# By convention, the name has ! to denote mutating, and the mutated argument comes first
function calculate_results!(results, val, params)
    @unpack N, b, C = params
    B = rand(N, N)          # Contrived. Assume complicated
    lmul!(val, B)           # val * B -> B in place, no allocation
    mul!(results.A, B, C)   # B * C -> results.A
    ldiv!(results.x, factorize(results.A), b)  # x = A \ b in place
end

# Some iterative algorithm
function iterate_values(vals, params)
    @unpack N = params
    # preallocate
    results = MyResults(N)
    prev_results = MyResults(N)
    norms = similar(vals, length(vals) - 1)
    for (i, val) in enumerate(vals)
        calculate_results!(results, val, params)
        if i > 1
            norms[i-1] = norm(results.x - prev_results.x)
            println("|x_new - x_old| = ", norms[i-1])
        end
        copy!(prev_results, results)
    end
    return norms
end
Hopefully that is helpful. Note that I also tried to point out a few of the other things, such as using in-place versions of the linear solve, multiplication, etc. Those may or may not be useful in your specific case, but when things get large you should always think about those sorts of things.