Varying a parameter in a function

I am trying to examine how the solution to an optimization problem changes as some parameters are changed in the objective. As a preliminary step, I am just trying to get comfortable modifying parameters in a function.

Here’s what I tried so far. It seems to work, but I think it may get awkward if there are many parameters or if it’s embedded in an otherwise complex problem. Is there a better approach to this sort of thing (sorry, I know that’s a bit vague)?

function f(x, y)
    return x + y
end

function g(y)
    # h is a closure: it captures the parameter y passed to g
    function h(x)
        return f(x, y)
    end
    return h
end

As desired, g(2)(7) produced 9.

You’re probably better off with closures or named arguments like:

julia> y = [7]
1-element Array{Int64,1}:
 7

julia> f(x, y = y) = x + y[1]
f (generic function with 2 methods)

julia> f(2)
9

julia> y .= 5
1-element Array{Int64,1}:
 5

julia> f(2)
7

Your approach means that you have to create a new function every time the parameter changes, which incurs compilation cost. That might not matter if your function is very expensive to evaluate and has low compilation overhead, but it can absolutely kill you if your parameter changes a lot and the function is otherwise quick to evaluate.

2 Likes

No, a closure is the right solution; don’t use global state like this, which isn’t really a closure. It makes everything much harder to work with. Having too many parameters is a completely orthogonal problem, and it is why structs exist.
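E.g., a minimal sketch of what I mean (the names Params, objective, and make_objective are just for illustration):

struct Params
    a::Float64
    b::Float64
end

objective(x, p::Params) = p.a * x + p.b           # some objective with parameters

make_objective(p::Params) = x -> objective(x, p)  # closure capturing the struct

h = make_objective(Params(1.0, 2.0))
h(3.0)  # 5.0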

No, this is wrong. There’s no new function produced, and there’s no compilation at all for each new closure (assuming the same input parameter types).
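You can check this yourself: all closures produced by the same definition share a single type, so the compiled code is reused across parameter values:

g(y) = x -> x + y

typeof(g(2)) === typeof(g(5))  # true: one closure type, compiled once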

2 Likes

And by “this is the right solution” I’m assuming that this interface works for you. If you actually must mutate something, say, halfway through the optimization, then, well, you’ll need to mutate something. A closure is still the right solution, and you still shouldn’t use global variables for this, but the exact syntax could be different depending on the exact user interface you want.
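For example, one possible interface (just a sketch; make_objective and set_y! are illustrative names): close over a Ref and hand back a setter alongside the objective:

function make_objective(y0)
    y = Ref(y0)
    f = x -> x + y[]           # the objective reads the current parameter
    set_y! = v -> (y[] = v)    # mutates the captured parameter, no globals
    return f, set_y!
end

f, set_y! = make_objective(7)
f(2)        # 9
set_y!(5)
f(2)        # 7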

My apologies, and thanks for pointing this out - I’m sure I ran into an issue with this before, where I had to solve an optimization problem for millions of rows in a DataFrame, with each row holding the parameters for the optimization problem. Initially I redefined the objective function for each row with the new parameters (which were all of the same type), which was very slow; following a discussion on Slack (which is now in the memory hole, of course) I moved to putting things in containers like the above, which was a lot faster.

I can’t reproduce this now though:

using Optim

function main()
	rosenbrock(x, a, b) = (a - x[1])^2 + 100.0 * b * (x[2] - x[1]^2)^2

	a = [1.0]
	b = [1.0]

	my_obj(x, a=a[1], b=b[1]) = rosenbrock(x, a, b)

	println("\n")
	println("Parameters in container")
	println("First optimization")
	@time optimize(my_obj, zeros(2), BFGS()).minimizer
	println("Second optimization")
	@time optimize(my_obj, zeros(2), BFGS()).minimizer

	println("Changing parameter value")
	a = [2.0]

	println("First optimization with new parameter")
	@time optimize(my_obj, zeros(2), BFGS()).minimizer
	println("Second optimization with new parameter")
	@time optimize(my_obj, zeros(2), BFGS()).minimizer
	
	println("\nRedefine objective function for each parameter change")
	my_obj2(x) = rosenbrock(x, 1.0, 1.0)
	println("First optimization")
	@time optimize(my_obj2, zeros(2), BFGS()).minimizer
	println("Second optimization")
	@time optimize(my_obj2, zeros(2), BFGS()).minimizer

	println("Changing parameter value")
	my_obj2(x) = rosenbrock(x, 1.0, 2.0)

	println("First optimization with new parameter")
	@time optimize(my_obj2, zeros(2), BFGS()).minimizer
	println("Second optimization with new parameter")
	@time optimize(my_obj2, zeros(2), BFGS()).minimizer
end

main()

Shows that even when redefining the objective with new parameters, there’s no compilation cost:

Parameters in container
First optimization
  1.215204 seconds (1.61 M allocations: 85.647 MiB, 2.55% gc time)
Second optimization
  0.000133 seconds (1.67 k allocations: 44.562 KiB)
Changing parameter value
First optimization with new parameter
  0.000180 seconds (2.89 k allocations: 75.438 KiB)
Second optimization with new parameter
  0.000160 seconds (2.89 k allocations: 75.438 KiB)

Redefine objective function for each parameter change
First optimization
  0.084680 seconds (100.97 k allocations: 5.498 MiB)
Second optimization
  0.000175 seconds (918 allocations: 39.281 KiB)
Changing parameter value
First optimization with new parameter
  0.000244 seconds (2.89 k allocations: 75.438 KiB)
Second optimization with new parameter
  0.000159 seconds (2.89 k allocations: 75.438 KiB)

I don’t know what you were doing before, but:

  1. The code as posted by the OP has no compilation cost when changing the parameter.
  2. Named (default or keyword) arguments are completely unrelated to closures. And the code you posted in your first post contains no closures (see the sketch below).
  3. There are other caveats related to closures, mostly related to compiler limitations. None are related to compilation.
  4. Your test code does not show anything about compilation. You never redefined any function when running that code. You simply passed in two functions, none of which are new on entering your main function.
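
To illustrate point 2, a minimal sketch of the difference:

y = [7]

# Default argument: nothing is captured; the default expression is just
# evaluated (looking up y in the enclosing scope) at each call.
f(x, y = y) = x + y[1]

# Closure: the returned function carries its own captured y.
make_f(y) = x -> x + y[1]
f2 = make_f(y)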

Sorry, just to clarify: I never meant to imply that what I posted above (using default args to make a one-argument function that references the containers for the parameters) creates a closure!

I’m not sure I understand your last point though - what do you mean by “none of which are new on entering your main function”? Clearly there is a compilation overhead on the first call of my_obj and my_obj2?

And I suppose I’m not using the right definition (no pun intended) of “redefine” - when I write:

my_obj2(x) = rosenbrock(x, 1.0, 1.0)
my_obj2(x) = rosenbrock(x, 1.0, 2.0)

I would refer to that as “redefining” (as my_obj2(0) would produce a different output after running the first line vs. after running the second line). Clearly this isn’t a helpful way to think about it though, as my other example

my_obj(x, a = a[1], b = b[1])

also gives a different output when a[1] changes, without “redefining” my_obj in the way I’ve done it with my_obj2.

Ok, in that case what I want to say is simply that there’s no point in using default arguments here.

No, my_obj2 isn’t a newly defined function. Compilation for it is/can be done before the first call.

None of the syntax used here so far redefines the function within main. All the functions are defined, and maybe redefined, before the main function runs. Unless you are using eval, you can completely forget about recompilation cost.
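
A small example of what I mean (demo and h are just illustrative names). Both calls print 2, because both definitions of h are processed when demo itself is defined, before its body ever runs:

function demo()
    h() = 1
    println(h())  # prints 2, not 1!
    h() = 2
    println(h())  # prints 2
end

demo()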

But why is the first call to my_obj2 when running main() in a fresh session consistently much more expensive (~0.05-0.1 seconds) than subsequent calls (~0.0002 seconds)?

1 Like

There’s a lot of other compilation that’s unrelated to your function.
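
For example (my understanding of where that first-call time goes): every function has its own concrete type, so generic code like optimize gets specialized, i.e. compiled, once per objective type, even though the objective itself was compiled before main() ran:

f1(x) = x[1]^2
f2(x) = x[1]^2

typeof(f1) == typeof(f2)  # false: two types, two specializations of optimize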

@yuyichao, can you give an example of how a closure can solve @samerb’s problem? I am not clear on how to use closures well at all. Thanks.

@ctkelley

He means to use and return an anonymous function instead of a function block. The example in the Julia docs shows exactly the use case in the initial post.

Julia Functions · The Julia Language

Basically, do the following:

f(y) = (x) -> x + y

g = f(7)

g(2)  # 9
1 Like

No, he already has a closure. Anonymous or not does not matter at all.

Ok, so based on what I saw here, I am trying a currying approach similar to what Duane_Wilson suggested. However, I don’t understand how to pair this with the nlsolve function, which seems to require all arguments to be components of a vector. I wasn’t expecting this to work, since there are many layers of things I don’t understand, but the code below is intended to illustrate what I’m trying to accomplish.

In words: I have the function f(x,y) = (x/(x+y))*log(1.1+x+y). Then I want a function where I give it a value of y and it gives me the value of x that minimizes f(x,y) given parameter y.

using NLsolve

curry(f, y) = (xs...) -> f(y, xs...)

function f!(F, x) # edited in response to rdeits' answer
    F[1] = x[2]/(x[2] + x[1])*log(1.1 + x[2] + x[1])
end

function maxer(f, y)
    partial = curry(f, y) # partial is supposed to be a function of x[2] with parameter x[1]=y

    nlsolve(f, [1]) # this is supposed to minimize partial
end

Calling maxer(f, 3) gives InexactError: Int64(NaN).

Your f(x) is trying to write into some global variable named F. You shouldn’t need to do that; just return the value of your function.

2 Likes
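
For example, a minimal sketch of the closure pattern applied to the Rosenbrock function from earlier in the thread, using Optim since you want a minimizer (nlsolve is a root-finder, i.e. it solves F(x) = 0); minimizer_given is just an illustrative name:

using Optim

rosenbrock(x, a, b) = (a - x[1])^2 + 100.0 * b * (x[2] - x[1]^2)^2

# Curry the parameters away: for fixed (a, b), build a one-argument
# objective via a closure (it just returns a value, no F to mutate)
# and minimize it over x.
function minimizer_given(a, b)
    obj = x -> rosenbrock(x, a, b)
    return optimize(obj, zeros(2), BFGS()).minimizer
end

minimizer_given(1.0, 1.0)  # ≈ [1.0, 1.0]
minimizer_given(2.0, 1.0)  # ≈ [2.0, 4.0]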