Hello everyone,

I have been playing around with JuMP recently to understand the structure of an optimal control problem. For clarity, the problem is the following: I want to find a function x(c,\theta) with x(c,\theta) \in [0,1] for all (c,\theta) \in [0,1]^2 that solves

\max_{x} \int_0^1 \int_0^1\big(2\theta-c-1\big)x(c,\theta)d\theta dc

subject to the following constraint for all \theta \in [0,1]:

\int_0^1 \Big[ \big(2c-\theta \big)x(c,\theta) + \int_0^\theta x(c,t)dt\Big] dc =0

My first attempt was to write a manually discretized version of the maximization problem using JuMP, interpreting the integrals as expectations over the uniform distribution on [0,1]. Here's a minimal working example.

```
using JuMP, HiGHS

types = 11                    # number of grid points per dimension
prob = ones(types) ./ types   # uniform probability weights
theta = collect(range(0, 1, length = types))
c = collect(range(0, 1, length = types))

lp = Model(HiGHS.Optimizer)
@variable(lp, 0 <= x[1:types, 1:types] <= 1)
# For every theta[j], the discretized integral constraint must hold.
@constraint(lp, EPICconst[j = 1:types],
    sum(prob[i] * ((2 * c[i] - theta[j]) * x[i, j]
        + sum(prob[k] * x[i, k] for k in 1:j)) for i in 1:types) == 0)
@objective(lp, Max,
    sum(prob[j] * sum(prob[i] * (2 * theta[j] - c[i] - 1) * x[i, j]
        for i in 1:types) for j in 1:types))
optimize!(lp)
```

This model solves very quickly and returns an optimal solution. I ran this code with various values of `types`, up to 41, to get a sense of whether the coarse discretization was affecting the result. The results seem very consistent: the matrix representing the function x(c,\theta) is essentially unaltered by changing the `types` variable.
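The sweep over `types` can be scripted like this (a sketch; `solve_discretized` is just a hypothetical helper wrapping the model above and returning the optimal objective):

```
using JuMP, HiGHS

# Build and solve the discretized LP for a given grid size; return the
# optimal objective value. This is the model from the snippet above,
# wrapped in a function for the convergence check.
function solve_discretized(types)
    prob = ones(types) ./ types
    theta = collect(range(0, 1, length = types))
    c = collect(range(0, 1, length = types))
    lp = Model(HiGHS.Optimizer)
    set_silent(lp)
    @variable(lp, 0 <= x[1:types, 1:types] <= 1)
    @constraint(lp, [j = 1:types],
        sum(prob[i] * ((2 * c[i] - theta[j]) * x[i, j]
            + sum(prob[k] * x[i, k] for k in 1:j)) for i in 1:types) == 0)
    @objective(lp, Max,
        sum(prob[j] * sum(prob[i] * (2 * theta[j] - c[i] - 1) * x[i, j]
            for i in 1:types) for j in 1:types))
    optimize!(lp)
    return objective_value(lp)
end

# Compare objective values across grid sizes.
for types in (11, 21, 41)
    println("types = ", types, "  objective = ", solve_discretized(types))
end
```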

Now, because this is an optimal control problem, I thought to try the `InfiniteOpt.jl` package. The same problem is quite simple to formulate, once one observes that we can write X(c,\theta) = \int_0^\theta x(c,t)\,dt, and thus \frac{\partial X}{\partial \theta}(c,\theta) = x(c,\theta).

Below is a minimal working example again, this time using `InfiniteOpt.jl`:

```
using InfiniteOpt, HiGHS

types = 11
model = InfiniteModel(HiGHS.Optimizer)
@infinite_parameter(model, c in [0, 1], num_supports = types)
@infinite_parameter(model, theta in [0, 1], num_supports = types)
@variable(model, 0 <= x <= 1, Infinite(c, theta), start = 10)
@variable(model, 0 <= X, Infinite(c, theta), start = 10)
# Since theta is left free, this constraint must hold for every theta.
@constraint(model, integral((2 * c - theta) * x + X, c) == 0)
# Link X to x through its derivative: dX/dtheta = x.
@constraint(model, x == @deriv(X, theta))
@objective(model, Max, integral(integral((2 * theta - 1 - c) * x, c), theta))
optimize!(model)
```

This is even simpler to write; note that I did not provide boundary conditions for X(c,\theta), since I do not care about the value of X, only about x(c,\theta). The solver finds a solution, I assume by discretizing according to the `types` variable. However, the solution is different from the one found by manually discretizing and then solving with JuMP! In fact, it is worse: the optimal objective value is roughly 10% below the objective of the manually discretized model.

I can't wrap my head around it. Does the issue have anything to do with discrete vs. continuous control? Or is it something that happens inside the optimization procedure, meaning I shouldn't trust `InfiniteOpt`?

Sorry for the long post, and thank you in advance!