I have a mixed-integer optimization problem with two variables: a binary variable, x, and a continuous (float) variable, P_G.
I was a little surprised that the value matrix of the binary variable was of type Float64 and not Int64:
julia> value.(x)
10×100 Array{Float64,2}:
0.0 1.0 1.0 0.0 0.0 -0.0 0.0 … 1.0 1.0 1.0 1.0 1.0 1.0
...
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0 1.0 1.0
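I can of course convert the matrix myself afterwards, assuming plain rounding is safe here:

x_int = round.(Int64, value.(x))   # elementwise round, then convert to Int64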
The Float64 type in itself is not a problem, but I also get genuinely fractional values, i.e. values strictly between 0 and 1:
value.(x)[value.(x) .> 0]
489-element Array{Float64,1}:
1.0
1.0
1.0
1.0
0.9999999999999999
1.0
...
Do I need to pass additional settings to the solver, or is this unintended behavior?
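If a solver setting is the intended fix, I am guessing it would be something like CPLEX's integrality tolerance (CPX_PARAM_EPINT is my guess at the relevant parameter; its default is 1e-5, I believe), set before solving:

set_optimizer_attribute(m, "CPX_PARAM_EPINT", 1e-9)   # tighten the integrality tolerance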
My model is a simple “unit commitment” problem:
using JuMP, CPLEX

# Data
P_C = [ 50  200;          # Generator capacities [P_min  P_max]
        25  250;
        75  300;
       100  400;
       125  500;
       150  600;
       175  700;
       200  800;
       225  900;
       250 1000]
P_D = LinRange(0, sum(P_C[:, 2]), 100)   # Power demand per time step
F   = rand(100:500, 10)                  # Random production prices
T   = length(P_D)                        # Number of time steps
N   = size(P_C, 1)                       # Number of generators

# Model
m = Model(CPLEX.Optimizer)
@variable(m, x[1:N, 1:T], Bin, start = 0)        # Unit activation (on/off)
@variable(m, P_G[1:N, 1:T])                      # Power generation

for t in 1:T                                     # Load balance
    @constraint(m, sum(P_G[:, t]) == P_D[t])
end

for i in 1:N, t in 1:T                           # Generation limits when unit i is active
    @constraint(m, P_C[i, 1] * x[i, t] <= P_G[i, t])
    @constraint(m, P_G[i, t] <= P_C[i, 2] * x[i, t])
end

@objective(m, Min, sum((P_G .* x) .* F))         # Objective: total production cost
optimize!(m)                                     # Solve
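For reference, this is how I currently check how far the reported solution is from integrality (the 1e-6 tolerance is just my own choice):

x_val = value.(x)
frac  = x_val[(x_val .> 1e-6) .& (x_val .< 1 - 1e-6)]   # entries that are neither 0 nor 1 within tolerance
maximum(abs.(x_val .- round.(x_val)))                   # largest deviation from an integer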