Adding a simple constraint in Optim.jl optimizer

I have a simple optimizing code that I would like to add a constraint to. I think it may fall under nonlinear optimization (so potentially NLopt.jl) but I have no idea.

The function is defined as the sum of squared differences.

using Optim

f(p) = sum((data_vector .- p[2] .* x_values .^ p[1]).^2)
optz = optimize(f, p0; autodiff = :forward)
println(Optim.minimizer(optz))

So I am trying to estimate p[1] and p[2] here, and I would like to constrain p[2] so that p[2] < 1.

It seems easy enough, but I am not sure how to actually do it. Do I need "box constraints"?
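For what it's worth, Optim.jl does support box constraints via `Fminbox`, which would let you impose an upper bound on `p[2]` directly. A minimal sketch, where `data_vector`, `x_values`, and `p0` are made-up placeholders (your real data would go there):

```julia
using Optim

# Synthetic data, for illustration only: coefficient 0.8, exponent 1.5.
x_values = collect(1.0:10.0)
data_vector = 0.8 .* x_values .^ 1.5

f(p) = sum((data_vector .- p[2] .* x_values .^ p[1]).^2)

lower = [-Inf, -Inf]   # no bounds on p[1]
upper = [Inf, 1.0]     # enforce p[2] <= 1
p0 = [1.0, 0.5]        # initial guess must lie inside the box

res = optimize(f, lower, upper, p0, Fminbox(LBFGS()))
println(Optim.minimizer(res))
```

Note that a box constraint gives you p[2] ≤ 1 rather than a strict inequality, though Fminbox's barrier keeps iterates in the interior of the box during the search.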

I don’t know the proper way to do this, but you could try:

f(p) = sum((data_vector .- (1 - p[2]^2) .* x_values .^ p[1]).^2)

And, of course, initialize p0[2] to sqrt(1 - p0[2]) (the inverse of the substitution), and recover the original parameter with minimizer[2] = 1 - minimizer[2]^2 at the end. Since 1 - p[2]^2 can never exceed 1, the constraint is built into the parameterization (strictly, it enforces p[2] ≤ 1, with equality only when the transformed parameter is exactly zero).
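The full substitution workflow above can be sketched end to end; here `data_vector`, `x_values`, and the initial guess are invented for illustration:

```julia
using Optim

# Synthetic data from a power law with coefficient 0.8 (< 1) and exponent 1.5.
x_values = collect(1.0:10.0)
data_vector = 0.8 .* x_values .^ 1.5

# Reparameterize: writing the coefficient as 1 - q[2]^2 guarantees it stays <= 1
# for any real q[2], so the unconstrained optimizer respects the constraint.
g(q) = sum((data_vector .- (1 - q[2]^2) .* x_values .^ q[1]).^2)

# Transform the initial guess p0 = [1.0, 0.5] into the new parameterization.
q0 = [1.0, sqrt(1 - 0.5)]

res = optimize(g, q0; autodiff = :forward)
qhat = Optim.minimizer(res)

# Map back to the original parameter.
phat = [qhat[1], 1 - qhat[2]^2]
println(phat)
```

The same idea works for other one-sided bounds, e.g. exp(q) for a positivity constraint; the trade-off is that the transformed problem can be harder to optimize near the boundary.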