JuMP-Ipopt NLP: Gradient not passed to solver

I have implemented a nonlinear (8-variable) unconstrained optimisation problem using JuMP and the Ipopt solver. User-defined functions were used for calculating the function value and the gradient. The implementation was successful. But now, when I use the same framework for a more advanced problem (time-consuming and long, but similar), the gradient is not being passed to the optimiser: it shows a zero gradient with respect to each parameter in the first iteration and exits. I have confirmed that the function is generating non-zero gradient values. Can someone tell me the possible problems in the implementation?

Let me know if any other information is required.

Can you share any code that demonstrates the problem?

Oscar

Hey Oscar,
Thanks for offering to help. I found out that my code was generating the gradient with respect to the design variables only (an array of 8 variables), and not with respect to the three parameters, but JuMP/Ipopt required a gradient vector of length 11 (8 variables + 3 parameters). I don’t understand this: why is it required to pass the gradient with respect to fixed parameters?

All I changed was from

```
storage = [ #= an 8 element array =# ];
```

to

```
storage[1:8] = [ #= an 8 element array =# ];
```

Printing the storage variable gave an 11-element array as output, where the last 3 elements were 0.0.
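
My understanding of why this matters (please correct me if I'm wrong): `storage = [...]` rebinds the local name to a new array, so the buffer JuMP passes in is never written to and the solver sees all zeros, whereas `storage[1:8] = [...]` writes into that buffer in place. A minimal sketch of the pattern, assuming the `register`/`@NLobjective` user-defined-function interface; the objective `f`, gradient `∇f!`, and the fixed parameter values are hypothetical stand-ins, not my actual CFD problem:

```
using JuMP, Ipopt

f(x...) = sum(xi^2 for xi in x)   # hypothetical stand-in objective

function ∇f!(storage, x...)
    # Fill the buffer JuMP passes in. `storage = [...]` would only
    # rebind the local name, and the solver would see zero gradients.
    for i in 1:length(x)
        storage[i] = 2 * x[i]
    end
    return
end

model = Model(Ipopt.Optimizer)
@variable(model, x[1:11])                     # 8 design variables + 3 parameters
fix.(x[9:11], [1.0, 2.0, 3.0]; force = true)  # hold the parameters fixed
register(model, :f, 11, f, ∇f!)               # user-supplied gradient, no autodiff
@NLobjective(model, Min, f(x...))
optimize!(model)
```

Since all 11 are JuMP variables (the fixed ones included), the gradient buffer has length 11, which would explain the three trailing zeros.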

Thanks

Hi Manish,

I can’t be of much help without seeing the code. If your example is small, you can post it in triple back-ticks, i.e.:
```

[code goes here]
```

otherwise you could post it as a gist.

Cheers,
Oscar

Hello Manish,

Why not use JuMP to pose your model? This way you do not need to worry about gradient calculations - JuMP will take care of it for you.

Ipopt greatly benefits from exact Hessian information. This is another reason to use JuMP to pose the model.
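
For example, with a hypothetical algebraic objective just to show the pattern: when the model is written out in JuMP expressions, exact gradients and Hessians are supplied to Ipopt automatically:

```
using JuMP, Ipopt

model = Model(Ipopt.Optimizer)
@variable(model, x[1:8], start = 0.5)
# JuMP's automatic differentiation provides the exact gradient and
# Hessian of this expression to Ipopt; no user-defined derivatives.
@NLobjective(model, Min, sum(exp(x[i]) - 2 * x[i] for i in 1:8))
optimize!(model)
```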

Alex


Hey Alex,
I am indeed using JuMP to pose my optimization problem. But I am using a nonlinear CFD solver to calculate the objective function value, and thus cannot use the autodiff option, if that is what you meant. Even if there were a finite-difference option (of which I am not aware), it would be very time-consuming, as the number of design variables is 8 (in the above problem; it can go up to 11). I have to compute an adjoint solution and project it onto the design parameters to obtain the gradient information, and finally I pass the gradient to JuMP from a user-defined function.
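
In case the structure helps: a rough sketch of how the external solver is wrapped, with `run_cfd` and `solve_adjoint` as hypothetical placeholders for the actual CFD and adjoint calls. The cache avoids paying for two CFD solves per iterate when the value and the gradient are requested at the same point:

```
# Hypothetical placeholders for the real solver calls:
run_cfd(x) = sum(abs2, x[1:8])      # expensive forward CFD solve
solve_adjoint(x) = 2 .* x[1:8]      # 8-element adjoint gradient

mutable struct CFDCache
    x::Vector{Float64}
    obj::Float64
    grad::Vector{Float64}
end
const CACHE = CFDCache(fill(NaN, 11), NaN, fill(NaN, 11))

function evaluate!(x::Vector{Float64})
    if x != CACHE.x                  # NaN sentinel forces the first solve
        CACHE.x = copy(x)
        CACHE.obj = run_cfd(x)
        # Pad with zeros for the three fixed-parameter slots:
        CACHE.grad = vcat(solve_adjoint(x), zeros(3))
    end
    return nothing
end

cfd_obj(x...) = (evaluate!(collect(x)); CACHE.obj)

function cfd_grad!(storage, x...)
    evaluate!(collect(x))
    storage[1:11] = CACHE.grad       # write in place, don't rebind
    return
end
```

These are then registered the same way as above, e.g. `register(model, :cfd_obj, 11, cfd_obj, cfd_grad!)`.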

Thanks
Manish

Hello Manish,

Ah, I have worked on similar PDE-constrained problems in the past, in which we integrated adjoint equations to obtain derivative information. This was several years ago. We used CVODES + ADOL-C + Ipopt (C interface). Does your problem have any additional (algebraic) constraints besides variable bounds?

To clarify my comment, I have observed that with only exact first derivative information (and a BFGS update for the Hessian), Ipopt can take significantly more iterations to converge. You might have better luck with an SQP method, especially if your problem only has 8 to 11 decision variables. But it appears your original question / issue might be with the JuMP interface.

Can you post the simple example you have that works?