I am new to optimization and I have a question for you:
these might be naive questions but when I register a nonlinear function I have two options:
- to provide JuMP with the first and second derivatives
- to use automatic differentiation
Why isn’t it necessary to provide JuMP/Gurobi with derivatives of a regular linear function?
For a linear problem, does the solver even need derivatives?
As far as I know, the regular simplex algorithm doesn’t use derivatives, but I am not sure.
For linear programming (LP), a variety of techniques may be used, including the simplex and interior-point methods; Gurobi can use both, as far as I am aware (see here). Derivatives are not needed when solving an LP with the simplex method: essentially, the method moves from vertex to vertex of the feasible region until it can no longer improve the objective. This is a mostly algebraic procedure, e.g. see here.
However, gradients of linear or even quadratic functions are really simple to find, so if they are required they can be computed exactly and automatically without much fuss. I hope this clears it up!
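To illustrate why this is "without much fuss", here is a small sketch (in Python, not anything JuMP actually runs): for an affine function f(x) = a'x + b the exact gradient is just the constant vector a, which a quick finite-difference check confirms.

```python
def affine(a, b):
    """Build f(x) = a' x + b and its exact gradient, which is simply a."""
    f = lambda x: sum(ai * xi for ai, xi in zip(a, x)) + b
    grad = lambda x: list(a)  # exact closed form: grad f(x) = a, for any x
    return f, grad

def finite_diff(f, x, h=1e-6):
    """Central finite differences, used here only to sanity-check the closed form."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

f, grad = affine([2.0, -3.0], 5.0)
x = [1.0, 4.0]
print(grad(x))         # exact gradient: [2.0, -3.0]
print(finite_diff(f, x))  # numerically indistinguishable from [2.0, -3.0]
```

The same holds for quadratics (the gradient of x'Qx + a'x is linear in x), which is why a solver can recover exact derivatives for these function classes from their coefficients alone.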
To provide a bit more context:
- JuMP (and MOI) have special types for linear functions (`ScalarAffineFunction`) and quadratic functions (`ScalarQuadraticFunction`), so the structure of these functions is known to the solver.
- If you know a function is linear, then for `f(x) = a' * x + b` it's trivial to write `∇f(x) = a`! Derivatives for quadratic functions are also easy.
- Some solvers, such as Ipopt, need derivatives.
- However, you don't need to provide those derivatives yourself, because JuMP computes them for you.
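To make the two registration options from the original question concrete, here is a hedged sketch using JuMP's legacy nonlinear `register` API (the function names `my_f` etc. are illustrative, and the exact call signature may differ across JuMP versions):

```julia
using JuMP, Ipopt  # assumes JuMP and Ipopt are installed; Ipopt needs derivatives

model = Model(Ipopt.Optimizer)
@variable(model, x >= 0)

# Option 1: let JuMP compute derivatives via automatic differentiation.
my_f(x) = exp(x) + x^2
register(model, :my_f, 1, my_f; autodiff = true)

# Option 2: supply the first (and, for univariate functions, second)
# derivative yourself.
∇my_f(x) = exp(x) + 2x     # first derivative
∇²my_f(x) = exp(x) + 2     # second derivative
register(model, :my_g, 1, my_f, ∇my_f, ∇²my_f)

@NLobjective(model, Min, my_f(x))
```

Note that neither option is needed for purely linear or quadratic expressions: those are captured by the special function types above, so the solver already knows their exact derivatives.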