# Convert Non-linear objective function from Maximization to Minimization

Hello!

I have the following non-linear objective function. How can I convert this maximization objective into a minimization problem?

I have read conflicting material online: some sources say to multiply the objective by -1, others say to multiply by the reciprocal.
Ideally, I would also prefer the decision variable not to appear in the denominator. The objective is represented in Julia as

```julia
@NLobjective(model, Max, sum(b_dict[r] * (p_dict[r] * l_star[r] / l_[r]) for r in od))
```

where `b_dict[r]`, `p_dict[r]`, and `l_star[r]` are collections of integers (representing demand, priority, and known arc length),

and `l_[r]` is a decision variable defined as `@variable(model, l_[od] >= 0)`.

The most common practice is to multiply by -1. That has the advantage of not introducing complications that may lead to unexpected gradients/Hessians.
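Concretely, using the same model, data, and variables as in the question, the conversion is just a sign flip on the whole sum (a sketch, not run against a solver):

```julia
# Equivalent minimization: negate the entire objective expression.
# The optimal values of l_[r] are unchanged; only the sign of the
# reported objective value flips relative to the Max formulation.
@NLobjective(model, Min, -sum(b_dict[r] * (p_dict[r] * l_star[r] / l_[r]) for r in od))
```

After solving, `-objective_value(model)` recovers the original maximization value.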

Got it,

It is still possible to multiply by the reciprocal as well though, correct?

Consider maximizing -x^2, which attains its maximum at x = 0. The reciprocal is -1/x^2: it has no minimum, and its derivative is undefined at x = 0, precisely where the original optimum sits. A gradient-based optimizer will encounter problems on a problem like this. On the other hand, if you just multiply by -1, you won't have any trouble.
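A quick numeric check of that point, in plain Julia (assuming nothing beyond Base):

```julia
f(x) = -x^2      # original objective to maximize; maximum is f(0) = 0
g(x) = -f(x)     # negated objective: x^2, minimized at the same x = 0
h(x) = 1 / f(x)  # reciprocal objective: -1/x^2

f(0.0)   # the maximum of the original problem
g(0.0)   # the minimum of the negated problem, at the same x
h(0.0)   # division by zero: the reciprocal blows up exactly at the optimum
h(1e-8)  # ≈ -1e16: unbounded below as x → 0, so no finite minimum exists
```

The negated problem preserves the optimizer and the smoothness of the objective; the reciprocal destroys both.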
