Numerical error when optimising expression with JuMP

I’m completely new to mathematical optimisation and would appreciate it if you could give me a hint here.

I have a function of three vectors x, y and z (each of size 15), and I want to maximise it subject to some constraints:

using JuMP, Ipopt

# n, cx, cy, cz, wx, wy, wz, Ax, Ay, Az are defined elsewhere (see below)
function build_model()
	model = Model(Ipopt.Optimizer)

	@variable(model, x[i = 1:n])
	@variable(model, y[i = 1:n])
	@variable(model, z[i = 1:n])
	
	@NLconstraint(model, sum(x[i] for i=1:n) <= cx)
	@NLconstraint(model, sum(y[i] for i=1:n) <= cy)
	@NLconstraint(model, sum(z[i] for i=1:n) <= cz)
	
	@NLexpression(model, my_expr,
		sum(
			wx * ((Ax[i] * x[i]^2) / (120 + x[i]^2 + 2y[i] - 0.8x[i] * z[i] + z[i]^2)) +
			wy * ((Ay[i] * y[i]^2) / (30 + 4x[i] + y[i]^2 + 7z[i])) +
			wz * ((Az[i] * z[i]^2) / (80 + 0.4x[i] + 0.2x[i] * z[i] + 0.6y[i] + z[i]^2)) for i = 1:n
		)
	)

	@NLobjective(model, Max, my_expr)
	
	return model
end

model = build_model();
optimize!(model)
termination_status(model)

wx, wy, wz and cx, cy, cz are scalars; Ax, Ay, Az are arrays of size 15 (n = 15).

The result is NUMERICAL_ERROR::TerminationStatusCode = 20

Does the way I’m using JuMP make sense? How would you go about debugging the issue?
Thanks in advance!

I can’t run your code because you haven’t provided the data.

Presumably, if you look at the Ipopt log, it will complain about “invalid number detected.” This most commonly occurs when you have a divide by zero, so your objective function is undefined at some point the solver evaluates.
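If you don’t see that message, make sure the log is actually being printed. A minimal sketch (assuming the build_model function above; “print_level” is a standard Ipopt option whose default is 5, with values up to 12 giving more detail):

model = build_model()
unset_silent(model)                                # in case output was suppressed with set_silent
set_optimizer_attribute(model, "print_level", 5)   # raise towards 12 for a more detailed log
optimize!(model)
termination_status(model)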

Add constraints or bounds on your variables to prevent this, or try a different starting point: https://jump.dev/JuMP.jl/stable/variables/#Start-values-1
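For example, if nonnegative variables and a unit starting point make sense for your application (both are assumptions on my part, not values from your post), the variable declarations could become:

@variable(model, x[i = 1:n] >= 0, start = 1.0)   # illustrative bound and start value
@variable(model, y[i = 1:n] >= 0, start = 1.0)
@variable(model, z[i = 1:n] >= 0, start = 1.0)

With x, y, z >= 0, none of the three denominators can reach zero: the only negative term, -0.8x[i]*z[i], is dominated by x[i]^2 + z[i]^2 (since 0.8xz <= 0.4x^2 + 0.4z^2), so the first denominator stays at least 120, and the other two are sums of nonnegative terms plus 30 and 80.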
