I am not sure I understand this. I did not suggest omitting an equation, just a way to map a subset of \mathbb{R}^n to the feasible parameter space (the set satisfying the constraint).
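Concretely, something like the following minimal sketch; the positivity and (-1, 1) constraints here are just hypothetical stand-ins for whatever your actual constraint is:

```julia
# The sampler/optimizer works on an unconstrained y in R^2; the model only
# ever sees the transformed, feasible parameters.
transform(y) = (sigma = exp(y[1]),    # exp maps R -> (0, Inf)
                rho   = tanh(y[2]))   # tanh maps R -> (-1, 1)

transform([0.3, -1.2])   # named tuple with sigma > 0 and -1 < rho < 1
```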
I suspect that modern HMC methods would do much better than Gibbs (unless it is a very specialized problem). See DynamicHMC.jl, or the NUTS algorithms in Klara.jl and Mamba.jl.
We have a constrained Newton solver in ConstrainedOptim. This could work if evaluating your gradient, Hessian, and the derivatives of the constraints is not too demanding.
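For reference, here is a rough sketch of what that interface looks like. I am writing it the way it appears in Optim.jl (where, as far as I know, the ConstrainedOptim solver was merged as IPNewton), so names in the standalone package may differ slightly; the objective and constraint below are made-up stand-ins:

```julia
using Optim

# Objective with hand-coded gradient and Hessian (Rosenbrock, as a stand-in).
f(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
function g!(G, x)
    G[1] = -2.0 * (1.0 - x[1]) - 400.0 * (x[2] - x[1]^2) * x[1]
    G[2] = 200.0 * (x[2] - x[1]^2)
end
function h!(H, x)
    H[1, 1] = 2.0 - 400.0 * x[2] + 1200.0 * x[1]^2
    H[1, 2] = H[2, 1] = -400.0 * x[1]
    H[2, 2] = 200.0
end

# One nonlinear constraint c(x) = x1^2 + x2^2 with its Jacobian and its
# contribution to the Lagrangian Hessian; the bounds keep the iterates
# inside a disc of radius 0.5.
con_c!(c, x)    = (c[1] = x[1]^2 + x[2]^2; c)
con_jac!(J, x)  = (J[1, 1] = 2x[1]; J[1, 2] = 2x[2]; J)
con_h!(H, x, λ) = (H[1, 1] += 2λ[1]; H[2, 2] += 2λ[1]; H)

x0  = [0.1, 0.1]                                  # strictly feasible start
obj = TwiceDifferentiable(f, g!, h!, x0)
con = TwiceDifferentiableConstraints(con_c!, con_jac!, con_h!,
                                     Float64[], Float64[],   # no box constraints
                                     [-Inf], [0.25])         # -Inf <= c(x) <= 0.5^2
res = optimize(obj, con, x0, IPNewton())
```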
Otherwise, try adding a penalty to your objective, such as

f(x) = g(x) + lambda * sum(abs2, delta)

where delta collects the constraint violations, and then use a continuation method: optimize f for a small value of lambda, then restart the optimization with a larger lambda, starting from the minimizer you got from the previous run, and repeat.
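A sketch of that loop, with a made-up objective and constraint just to show the warm-started continuation over increasing lambda:

```julia
using Optim

g(x)     = (x[1] - 2.0)^2 + (x[2] - 1.0)^2      # stand-in objective
delta(x) = [max(x[1] + x[2] - 1.0, 0.0)]        # violations of a hypothetical constraint x1 + x2 <= 1

# Continuation: minimize the penalized objective for increasing lambda,
# warm-starting each solve from the previous minimizer.
function continuation(x0; lambdas = (1.0, 10.0, 100.0, 1_000.0))
    x = x0
    for lambda in lambdas
        f = z -> g(z) + lambda * sum(abs2, delta(z))
        x = Optim.minimizer(optimize(f, x, BFGS()))
    end
    return x
end

continuation([0.0, 0.0])
```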