Ok, I tried a lot of things and this is what I found.
With your JuMP.jl model, I get:
julia> value.(x)
11×11 Matrix{Float64}:
0.0 0.0 0.0 0.0 0.0 -0.0 -0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0804598 0.173624 0.283987 0.418876
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
julia> objective_value(lp)
0.093492240953056
So, we do get some values that are not 0 or 1.
With your InfiniteOpt.jl model, I get:
julia> value(x, ndarray = true)
11×11 Matrix{Float64}:
0.0 0.0 0.0 0.0 -0.0 -0.0 -0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.5 0.5 0.5 0.5
0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
julia> objective_value(model)
0.07500000000000001
So, our choice of approximation scheme definitely affects the solution we get. Interestingly, here the values that are not 0 or 1 are exactly 0.5.
Now let’s see what happens when we formulate the InfiniteOpt model without the differential equation and instead use nested integrals like your JuMP model. Having an infinite parameter as an integral bound is not a use case that we considered for InfiniteOpt, so it is not currently supported. We can hack around this a little and use a manual sum like you did in JuMP:
using InfiniteOpt, HiGHS
types = 11
model = InfiniteModel(HiGHS.Optimizer)
@infinite_parameter(model, c ∈ [0,1], num_supports = types)
@infinite_parameter(model, θ ∈ [0,1], num_supports = types);
@variable(model, 0 ≤ x ≤ 1, Infinite(c, θ), start = 10)
θs = supports(θ)
@constraint(model, [j ∈ eachindex(θs)], ∫((2c - θs[j])*x(c, θs[j]) + sum(1/types * x(c, θs[k]) for k ∈ 1:j), c) == 0)
@objective(model, Max, ∫(∫((2θ - 1 - c)*x, c), θ))
optimize!(model)
Now we get:
julia> value(x, ndarray = true)
11×11 Matrix{Float64}:
0.0 0.0 0.0 0.0 0.0 -0.0 -0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.592308 0.705128 0.849434 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0283575
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
julia> objective_value(model)
0.07655678791492745
So, this still differs from the JuMP model, since InfiniteOpt is still using the trapezoid rule for the remaining integrals.
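For reference, the two schemes weight the supports quite differently. With n equally spaced supports on [0, 1] (so Δc = 1/(n-1)), the trapezoid rule and the simple average used in the JuMP model are:

```latex
% Trapezoid rule (InfiniteOpt's default integral evaluation):
\int_0^1 f(c)\,\mathrm{d}c \approx \sum_{i=1}^{n-1} \frac{\Delta c}{2}\bigl(f(c_i) + f(c_{i+1})\bigr)

% Simple average (the 1/types weighting in the JuMP model):
\int_0^1 f(c)\,\mathrm{d}c \approx \sum_{i=1}^{n} \frac{1}{n}\, f(c_i)
```

The trapezoid rule half-weights the two endpoint supports, and Δc = 1/(n-1) differs from 1/n, so the two discretizations are not equivalent even on the same support grid.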
Ok, let’s now force InfiniteOpt.jl to use the same simple averaging scheme via support_sum:
using InfiniteOpt, HiGHS
types = 11
prob = 1 / types
model = InfiniteModel(HiGHS.Optimizer)
@infinite_parameter(model, c ∈ [0,1], num_supports = types)
@infinite_parameter(model, θ ∈ [0,1], num_supports = types);
@variable(model, 0 ≤ x ≤ 1, Infinite(c, θ), start = 10)
θs = supports(θ)
@constraint(model, [j ∈ eachindex(θs)], support_sum(prob * ((2c - θs[j])*x(c, θs[j]) + sum(prob * x(c, θs[k]) for k ∈ 1:j)), c) == 0)
@objective(model, Max, support_sum(prob * support_sum(prob * (2θ - 1 - c)*x, c), θ))
optimize!(model)
Now we get:
julia> value(x, ndarray = true)
11×11 Matrix{Float64}:
0.0 0.0 0.0 0.0 0.0 -0.0 -0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0804598 0.173624 0.283987 0.418876
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
julia> objective_value(model)
0.093492240953056
This is exactly the same result, since the JuMP model and this discretized InfiniteOpt model are equivalent. So, the choice of integral approximation seems to be the chief source of the difference.
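Written out, the support_sum constraint imposes, for each support θ_j (with n = types, prob = 1/n, and x_{i,j} = x(c_i, θ_j)):

```latex
\sum_{i=1}^{n} \frac{1}{n} \left[ (2c_i - \theta_j)\, x_{i,j}
  + \sum_{k=1}^{j} \frac{1}{n}\, x_{i,k} \right] = 0,
  \qquad j = 1, \dots, n
```

which is exactly the linear constraint the discrete JuMP model builds, so identical solutions are expected.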
Ok, now let’s add the differential equation back in:
using InfiniteOpt, HiGHS
types = 11
prob = 1 / types
model = InfiniteModel(HiGHS.Optimizer)
@infinite_parameter(model, c ∈ [0,1], num_supports = types)
@infinite_parameter(model, θ ∈ [0,1], num_supports = types);
@variable(model, 0 ≤ x ≤ 1, Infinite(c, θ), start = 10)
@variable(model, 0 ≤ X, Infinite(c, θ), start = 10)
@constraint(model, support_sum(prob * ((2c - θ)*x + X), c) == 0)
@constraint(model, x == ∂(X, θ))
@objective(model, Max, support_sum(prob * support_sum(prob * (2θ - 1 - c)*x, c), θ))
optimize!(model)
With this, now we get:
julia> value(x, ndarray = true)
11×11 Matrix{Float64}:
0.0 0.0 0.0 0.0 -0.0 -0.0 -0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.0 -0.0 -0.0 -0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
julia> objective_value(model)
0.09256198347107437
This is much closer to the JuMP solution than the original InfiniteOpt formulation was, and now the solution only contains 0s and 1s. So, it seems the principal difference between your formulations was the choice of integral approximation scheme.
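This makes sense if we look at how the derivative is discretized. Assuming InfiniteOpt's default finite-difference derivative method (backward differences, if I recall the default correctly), the constraint x == ∂(X, θ) transcribes to:

```latex
x_{i,j} = \frac{X_{i,j} - X_{i,j-1}}{\Delta\theta}
\quad\Longleftrightarrow\quad
X_{i,j} = X_{i,j-1} + \Delta\theta\, x_{i,j}
```

so X plays the role of the cumulative sum over x_{i,k} for k ≤ j from the manual formulation, up to the Δθ = 1/(n-1) vs. prob = 1/n weighting and the treatment of the initial support.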
Interestingly, we can use other approximation schemes for integrals in InfiniteOpt. Let’s try the high-fidelity quadrature scheme FEGaussLobatto:
using InfiniteOpt, HiGHS
types = 11
prob = 1 / types
model = InfiniteModel(HiGHS.Optimizer)
@infinite_parameter(model, c ∈ [0,1], num_supports = types)
@infinite_parameter(model, θ ∈ [0,1], num_supports = types);
@variable(model, 0 ≤ x ≤ 1, Infinite(c, θ), start = 10)
@variable(model, 0 ≤ X, Infinite(c, θ), start = 10)
# change default of integral evaluation
set_uni_integral_defaults(eval_method = FEGaussLobatto(), num_nodes = 5)
@constraint(model, ∫((2c - θ)*x + X, c) == 0)
@constraint(model, x == ∂(X, θ))
@objective(model, Max, ∫(∫((2θ - 1 - c)*x, c), θ))
optimize!(model)
This gets:
julia> value(x, ndarray = true)
11×11 Matrix{Float64}:
0.0 0.0 0.0 0.0 -0.0 -0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.0 0.0 0.0 0.0
0.0 0.0 0.0 -0.0 0.0 0.0 0.0 -0.0 0.0 0.0 0.0
julia> objective_value(model)
0.07404655597983774
Interestingly, this recovers the same formulation if we choose num_nodes = 2, but there is currently a bug that I need to fix before it will run.
Finally, if I run the JuMP model but let types = 100, we get:
julia> objective_value(lp)
0.07602375049463848
This is close to what InfiniteOpt obtains with only types = 11.
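For reference, the discrete JuMP model I ran is essentially the following (a sketch reconstructed from the equivalent support_sum formulation above; your original code may differ in details such as names):

```julia
using JuMP, HiGHS

types = 100
prob = 1 / types
cs = range(0, 1, length = types)  # supports for c
θs = range(0, 1, length = types)  # supports for θ

lp = Model(HiGHS.Optimizer)
@variable(lp, 0 <= x[1:types, 1:types] <= 1)
# For each θ_j: average over c of (2c - θ_j) x plus the cumulative-sum term
@constraint(lp, [j in 1:types],
    sum(prob * ((2cs[i] - θs[j]) * x[i, j] +
        sum(prob * x[i, k] for k in 1:j)) for i in 1:types) == 0)
# Double average of (2θ - 1 - c) x over both supports
@objective(lp, Max,
    sum(prob^2 * (2θs[j] - 1 - cs[i]) * x[i, j] for i in 1:types, j in 1:types))
optimize!(lp)
objective_value(lp)
```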
So, at the end of the day, we can clearly see that our choice of approximation scheme affects the solution we get. Given how the discrete JuMP model behaves as we increase the number of supports, it seems that the default approximation schemes used by InfiniteOpt are more accurate relative to the continuous-time case.