In this case, my objective function is obj(x,y), α(x) is
another generic function called by obj(x,y).
I received an error message that mainly says:
`MethodError: no method matching *(::Float64, ::Nothing)`
It seems like a definition problem in α(x). How can I fix this error?

First: Lnom, xLdf, yLdf are not defined in your example, but I am assuming you have them somewhere.
Second: your α doesn’t return any value if x is smaller than Lmin or larger than Lmax. You start the optimization with x = 0.5 < Lmin = 0.8, so it will not run. Even if you start somewhere else, like x = 1, Ipopt may still run into trouble while solving the problem.
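For what it’s worth, that is exactly what the error message says: a branchless `if`/`elseif` returns `nothing` in Julia when no branch matches, and multiplying a `Float64` by `nothing` has no method. A minimal sketch (with a simplified, hypothetical α) reproducing it:

```julia
# Hypothetical simplified α: no `else` branch, so it implicitly
# returns `nothing` outside [Lmin, Lmax] = [0.8, 3.5].
function α_bad(x)
    if 0.8 <= x <= 3.5
        return 0.006226
    end
end

w1 = 0.4
err_is_methoderror = try
    w1 * α_bad(0.5)   # x = 0.5 < Lmin = 0.8, so α_bad returns nothing
    false
catch err
    # MethodError: no method matching *(::Float64, ::Nothing)
    err isa MethodError
end
println(err_is_methoderror)  # true
```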

This runs:

```julia
using JuMP, Ipopt

const R0 = 0.1803
Lmin, Lmax = 0.8, 3.5
Lnom = 2.0
w1 = 0.4
w2 = 0.6
Rfc1 = 0.1803 + 0.01
Rfc2 = 0.1803
L11, L21 = 2.1, 2.1
Dindf = repeat([250], 2)
Ldf = [5, 4]

function α(x)
    a = 0.0019727939
    b = 0.0078887
    α_0 = 0.006226
    if Lmin <= x < Lnom
        return a * (x - Lnom)^2 + α_0
    elseif Lnom <= x <= Lmax
        return b * (x - Lnom)^2 + α_0
    else
        return 0.0  # New part here: always return a value
    end
end

function obj(x, y)
    f0dL = (w1 * (Ldf[1] - L11)^2 + w2 * (Ldf[1] - L21)^2) * 1 * 3600
    return (w1 * α(x)) * Dindf[1] + f0dL
end

model = Model(Ipopt.Optimizer)
lb = maximum(Lmin ./ Ldf)
ub = minimum(Lmax ./ Ldf)
@variable(model, lb <= x <= ub, start = 1.0)
@variable(model, lb <= y <= ub, start = 1.0)
@NLconstraint(model, x + y == 1)
register(model, :obj, 2, obj; autodiff = true)
@NLobjective(model, Min, obj(x, y))
optimize!(model)
```

(I created the missing values)

I think your α needs some work outside Julia. How is it possible to require x >= 0.8 and y >= 0.8 in α(x) and α(y), and then enforce the constraint x + y = 1?

Hi, thank you very much for the detailed checking.
I am sorry for the mistakes in my example; I have corrected it.
The point you mentioned, that the start point is not defined in α(x) and α(y), is due to a mistake in writing the example (the values I defined were wrong; **the function form** is what I actually need).

I have checked your example, and I think you are right: in my simulation, when α(x) returns nothing, it causes an error. I will carefully check my constraint settings.

Can you give some advice on defining the function α(x) here?
In my program, x needs to satisfy Lmin <= x <= Lmax; values outside this
range are not considered (due to my problem settings). Is it necessary to
assign values outside the range, as you did by setting them to 0.0?

Yes, I think your α should always return a result. Ipopt may temporarily violate constraints while solving your problem, so α should be able to return a value even when x is outside the required range. I don’t know if zero is the right answer, though. Maybe this topic will be of use?

This sounds like a bad solution: your \alpha function would not be continuously differentiable. You should instead introduce the correct bounds on your variables (that is, if \alpha is defined for x \in [L_{min}, L_{max}], then x should be constrained to [L_{min}, L_{max}]).

Thanks Cvanaret. Yes, I see your point and agree.
But my optimization comes from a practical application: each piece of the
α function (over [Lmin, Lnom] and [Lnom, Lmax]) is fitted from data,
so it is unlikely to be expressible with a single formula over the whole of [Lmin, Lmax].
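For what it’s worth, with the coefficients from the example above, the two fitted quadratic pieces already join smoothly at Lnom: both branches equal α_0 there, and both one-sided slopes are zero, so α is continuously differentiable across [Lmin, Lmax]. A quick check:

```julia
# Coefficients copied from the example above
a, b, α_0 = 0.0019727939, 0.0078887, 0.006226
Lnom = 2.0
left(x)  = a * (x - Lnom)^2 + α_0   # piece fitted on [Lmin, Lnom]
right(x) = b * (x - Lnom)^2 + α_0   # piece fitted on [Lnom, Lmax]

println(left(Lnom) == right(Lnom))            # true: both equal α_0 at the joint
# each one-sided derivative is 2·coef·(x − Lnom), i.e. zero at x = Lnom
println(2a * (Lnom - Lnom), " ", 2b * (Lnom - Lnom))  # 0.0 0.0
```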

Yes, I see that \alpha is defined piecewise. But this doesn’t change the problem: if \alpha isn’t defined for some values of x, then x shouldn’t take these values.

In the definition of \alpha, you can have some partitions (i.e. [L_{min}, L_{nom}] or [L_{nom}, L_{max}]) over which you fitted some models; that’s fine. What matters is the overall domain of \alpha.

Yes, I agree. For your last point, do you mean that if my problem limits x to
[Lmin, Lmax], then I should not need to define α(x) outside this domain (e.g. for x < Lmin)?

Yes, the domain of \alpha and the bounds of x should be consistent. Otherwise, it makes no sense: why would you choose a value for the optimal x at which \alpha is not defined?

I see. Constrained optimization is not easy,
especially when the objective function involves
a piecewise-defined function.
For the case Blob described, where an error is raised because
the objective returns nothing, maybe it is necessary to
construct values for the undefined domain,
perhaps by adding a penalty outside the function’s
original domain of definition.

The vast majority of solvers satisfy the bound constraints at each iteration. So if you provide the correct bounds for your variables, x will never lie outside of its bounds, and by extension \alpha will never be evaluated where it is undefined.

I tend to see “undefined domain” errors as a modeling flaw. Most of the time you can reformulate your problem to avoid out-of-domain evaluations.
For example, the function \sqrt{1 - k(1 - x^2)} is not defined when the radicand is negative. Instead of penalizing this scenario in the objective function, introduce a new variable y \ge 0 and a new constraint y = 1 - k(1 - x^2), and use the quantity \sqrt{y}. Since the bound constraints are satisfied at each iteration, the square root will never be evaluated at a negative number (differentiability at y = 0 is another problem).
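As a sketch, that reformulation could look like this in JuMP. The constant k = 0.5, the bounds on x, and the objective sqrt(y) + x^2 are placeholders for illustration, not part of the original problem:

```julia
using JuMP, Ipopt

k = 0.5                                      # assumed constant for the sketch
model = Model(Ipopt.Optimizer)
@variable(model, -1 <= x <= 1, start = 0.5)
@variable(model, y >= 0, start = 0.5)        # y stands in for the radicand
@NLconstraint(model, y == 1 - k * (1 - x^2)) # defines y; may be violated mid-solve
@NLobjective(model, Min, sqrt(y) + x^2)      # sqrt only ever sees y >= 0
optimize!(model)
```

The equality constraint may be temporarily violated during the solve, but the bound y >= 0 is respected at every iterate, so `sqrt` is never evaluated at a negative argument.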