# Help: How to have my function named penalty_method run?

I am trying to get the penalty function to run using a given optimize function at the top of the code. I want the penalty_method function to run for a certain amount of iterations (n=1000, so i=500), but I cannot figure out the structure. f, g, c, and x0 are given by the function optimize, so I need to integrate my code to run penalty_method to converge on a minimum. There are two constraints given as a vector by the function optimize, c1 and c2. Am I correctly calling them by saying c(x)[1], and c(x)[2]? In the end, I want the value x’ (the optimized x) extracted from the function optimize.

function optimize(f, g, c, x0, n, prob)
    x′ = x0
    # y_best = f(x′)
    i = 1
    R = penalty_method(f, p, x, k_max; ρ=1, γ=2)
    while i <= n/2  # fc + 2*gc
        i += 1
        x' = penalty_method(f, p, x', k_max; ρ=1, γ=2)
    end
    return x′
end

function penalty_method(f, p, x, k_max; ρ=1, γ=2)
    for k in 1:k_max
        q1 = c(x)[1]
        q2 = c(x)[2]
        p = x -> f(x) + ρ*(sum(q1 > 0) + sum(q2 > 0))
        x = powell(x -> f(x) + ρ*p(x), x)
        ρ *= γ
        if p(x) == 0
            return x
        end
    end
    return x
end

function powell(f, x, ϵ)
    n = length(x)
    U = [basis(i, n) for i in 1:n]
    Δ = Inf
    while Δ > ϵ
        x′ = x
        for i in 1:n
            d = U[i]
            x′ = line_search(f, x′, d)
        end
        for i in 1:n-1
            U[i] = U[i+1]
        end
        U[n] = d = x′ - x
        x′ = line_search(f, x, d)
        Δ = norm(x′ - x)
        x = x′
    end
    return x
end


You aren’t passing these (such as c) to your penalty_method function, so it will give an error when they are called in that function unless you add them as function arguments.

If c is a vector of functions, like c = [sin, cos], then you would call them as c[1](x) and c[2](x).
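To make the distinction concrete, here is a small sketch (with made-up constraints) of the two calling conventions side by side:

```julia
# Hypothetical constraints, only to illustrate the two packagings.
c_vec = [x -> x[1] - 1, x -> -x[2]]   # c as a vector of functions
c_fun = x -> [x[1] - 1, -x[2]]        # c as one function returning a vector

x = [2.0, 3.0]
c_vec[1](x)   # index first, then call:  gives 1.0
c_fun(x)[1]   # call first, then index:  also gives 1.0
```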


Thank you for your help so far.
I only want these constraint values to add to the penalty function when they are positive, so would entering q1 and q2 into p inside penalty_method be incorrect? I am unsure how else to add them. My main concern, though, is the syntax of the optimize function: is the syntax and the use of R correct? Essentially, I want penalty_method to run until the number of iterations reaches 1000, and then output the x-value it has reached. Below is the code that led me to write c(x)[1] and c(x)[2]:

function simple1_constraints(x::Vector)
    return [x[1] + x[2]^2 - 1,
            -x[1] - x[2]]
end

const simple_problems = Dict(1 => (f=simple1, g=simple1_gradient, c=simple1_constraints, x0=simple1_init, n=1000))

I am currently getting the following error:
"syntax: invalid assignment location \"x'\"" on line 28, which is the first line (function optimize(f, g, c, x0, n, prob)).

This isn’t an answer to your overall question but you’re getting the invalid assignment location error because

x'


is not a valid variable name. The ' symbol is used to take the adjoint of a variable. For example:

julia> x = rand(4)
4-element Array{Float64,1}:
0.1346318328288365
0.4288867990992209
0.7113093810849349
0.9144179988302221

julia> x'
1×4 LinearAlgebra.Adjoint{Float64,Array{Float64,1}}:
 0.134632  0.428887  0.711309  0.914418


On the other hand, x′ is a valid variable name, where ′ (U+2032) is typed in the REPL by tab-completing \prime.
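For example (a trivial sketch):

```julia
x′ = [1.0, 2.0]   # fine: x′ (with U+2032) is an ordinary identifier
x′ .+ 1           # broadcasts over the vector; x' would instead be an adjoint
```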

First, from your example, it seems that c(x) is a single function that returns a vector of the constraint values. In this case, you want:

q = c(x)


rather than calling c(x)[1] and c(x)[2], which calls c(x) twice. And, as I said, you still need to include c as an argument to penalty_method.

Second, sum(q1>0)+sum(q2>0) is probably not what you want. That adds 1 for each constraint that is positive.

In many penalty-function optimization methods, like augmented-Lagrangian methods, you add the square of the constraints, i.e. f(x) + ρ*(q[1]^2 + q[2]^2) for equality constraints, or f(x) + ρ*sum(v -> v^2, q). If these are inequality constraints, then you probably want f(x) + ρ*(max(q[1],0)^2 + max(q[2],0)^2) (still squaring so that it has a continuous derivative), or f(x) + ρ*sum(v -> max(v, 0)^2, q).
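A tiny numeric sketch (with made-up constraint values) of the difference between the count-based term and the quadratic inequality penalty:

```julia
q = [0.5, -2.0]   # example constraint values: q[1] is violated, q[2] is satisfied
ρ = 10.0

count_term = ρ * (sum(q[1] > 0) + sum(q[2] > 0))   # jumps by ρ per violated constraint
quad_term  = ρ * sum(v -> max(v, 0)^2, q)          # smooth: 10 * 0.5^2 = 2.5
```

The quadratic form grows continuously with the size of the violation, so the minimizer gets a useful gradient instead of a discontinuous step.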

You apparently have another bug in your code because you are passing x -> f(x) + ρ*p(x) to powell, when p(x) already includes f(x) and ρ.
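Putting those corrections together, here is one hedged sketch of how penalty_method could look, assuming c is a single function returning a vector of inequality-constraint values. crude_minimize is a hypothetical stand-in for powell, included only so the sketch is self-contained; swap in your real powell with its line search.

```julia
# Crude stand-in minimizer (coordinate search with shrinking steps).
# Only here so the sketch runs; not part of the original code.
function crude_minimize(f, x; iters=200)
    x = copy(x)
    step = 1.0
    for _ in 1:iters
        improved = false
        for i in eachindex(x), s in (step, -step)
            y = copy(x)
            y[i] += s
            if f(y) < f(x)           # greedy: keep any improving move
                x = y
                improved = true
            end
        end
        improved || (step /= 2)      # no move helped: refine the step size
    end
    return x
end

function penalty_method(f, c, x, k_max; ρ=1.0, γ=2.0)
    for k in 1:k_max
        p = z -> sum(v -> max(v, 0)^2, c(z))        # quadratic inequality penalty; no f, no ρ here
        x = crude_minimize(z -> f(z) + ρ*p(z), x)   # ρ and f appear exactly once
        ρ *= γ                                      # stiffen the penalty each outer round
        p(x) == 0 && return x                       # feasible point found: stop early
    end
    return x
end
```

For instance, minimizing f(x) = x[1]^2 + x[2]^2 subject to x[1] + x[2] ≥ 1 (i.e. c(x) = [1 - x[1] - x[2]]) drives x toward roughly (0.5, 0.5) as ρ grows.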