Passing a "dynamic" argument to a function in Optim.jl

Hello,
I have an objective function that solves for a vector z using an iterative algorithm, and I feed this objective into optimize(). I can use a “static” starting guess z0 that I set before calling optimize. However, it might be faster to start from the actual z obtained the last time optimize called the objective function. The idea is that optimize is often computing numerical gradients, so the new z will be very close to the previous one.

I can probably achieve this using a global for z0, but I was wondering
(A) whether this can be achieved more cleanly, and
(B) whether there are reasons not to do this at all.

For this application, I can prove that z is unique (and hence the starting condition does not matter except for speed).

Here is an MWE (note: in the real application I cannot solve for z analytically, as I can here, so I really do need to iterate):

function mylossfn(delta, xgrid, z0, z_data)

	z = z0
	err = 1.0   # `error` shadows Base.error, so use a different name

	# solve Bellman equation by iteration
	# (in the real application there is no closed-form solution)
	while err > 1e-8
		z_next = (xgrid .^ 2) .+ delta .* z
		err = sum( (z .- z_next).^2 )
		z = z_next   # update the iterate (without this the loop never converges)
	end

	# loss function
	return sum( (z .- z_data).^2 )

end

using Optim

xgrid = 1:100
myz0 = zeros(100)
# read my_z_data from a data file
delta0 = [0.5]   # BFGS expects a vector of parameters

optimize(x -> mylossfn(x, xgrid, myz0, my_z_data),
		delta0,
		BFGS())

No need for globals. Just mutate the z0 argument inside your mylossfn, and then lexical scoping will ensure that optimize(x -> mylossfn(x, xgrid, myz0, my_z_data), ...) uses whatever variable myz0 you defined in your local scope.
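
For concreteness, here is a minimal sketch of that idea, reusing xgrid and my_z_data from the MWE above (the trailing ! in the name is just the usual Julia convention for a function that mutates one of its arguments; the key line is z0 .= z):

function mylossfn!(delta, xgrid, z0, z_data)

	z = copy(z0)   # start the iteration from the last converged solution
	err = 1.0

	while err > 1e-8
		z_next = (xgrid .^ 2) .+ delta .* z
		err = sum( (z .- z_next).^2 )
		z = z_next
	end

	z0 .= z   # write the converged z back into z0, in place, for the next call
	return sum( (z .- z_data).^2 )

end

myz0 = zeros(100)   # reused (and updated) across every objective evaluation
optimize(x -> mylossfn!(x, xgrid, myz0, my_z_data), [0.5], BFGS())

Because the closure captures myz0 and mylossfn! updates it in place, every objective evaluation starts its iteration from the z found by the previous one.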

Fantastic, this is so helpful, thank you!

It seems to me that the way to do this (inside mylossfn) is

for i=1:length(z)
	z0[i] = z[i]
end

rather than the naive z0 = z that I tried initially. (That assignment only rebinds the local name z0 to the array z; it never touches the array that myz0 refers to outside the function.)
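
For what it's worth, the element-wise loop can also be written as a one-line in-place copy; for a plain Vector the forms below should be equivalent:

z0 .= z          # broadcast assignment writes into the existing z0 array
# or, equivalently:
copyto!(z0, z)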