using JuMP, GLPK

data = [2; 3; 4; -1; 0]
model = Model(GLPK.Optimizer)
@variable(model, x[1:length(data)], Bin)
@objective(model, Min, sum(data .* x))
optimize!(model)
typeof(value.(x))
I would expect the type of value.(x) to be Array{Bool,1}, but I get Array{Float64,1} instead. Is there a reason for using Floats instead of Bools for binary variables?
Yes. Almost all solvers implement binary (and integer) variables in floating point. They check integrality against a tolerance (e.g., IntFeasTol in Gurobi).
You can (and should expect to) obtain solutions like 1.000001 that the solver interprets as binary.
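If you want genuine Bool values on the Julia side, round the returned floats yourself; this is safe because the solver already guarantees integrality to within its tolerance. A minimal sketch, assuming the model above has been solved:

```julia
# value.(x) returns Vector{Float64}, e.g. [0.0, 0.0, 0.0, 1.0, 0.0]
x_bool = round.(Bool, value.(x))      # Vector{Bool}

# equivalently, with an explicit threshold in the spirit of IntFeasTol:
x_bool2 = [v > 0.5 for v in value.(x)]
```

Both forms give the same answer for a feasible binary solution; the first will throw an InexactError if a value is far from 0 or 1, which can serve as a sanity check.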
The most common approach to solving these (NP-hard) problems is to relax the integrality, solve the continuous relaxation, and then successively partition the search space until the relaxed problem has an integer solution. I’m guessing you don’t have a background in integer optimization, so you may want to read up on the following: linear programming relaxations, branch and bound, and cutting planes.
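You can see the relaxation step directly in JuMP: relax_integrality drops the Bin constraints (replacing them with 0–1 bounds) and returns a function that restores them. A sketch, assuming a JuMP version recent enough to provide relax_integrality:

```julia
using JuMP, GLPK

data = [2; 3; 4; -1; 0]
model = Model(GLPK.Optimizer)
@variable(model, x[1:length(data)], Bin)
@objective(model, Min, sum(data .* x))

# Replace the Bin constraints with 0 <= x <= 1: this is the LP relaxation
undo = relax_integrality(model)
optimize!(model)
value.(x)   # fractional in general; integral here since nothing couples the x's

undo()      # restore the binary constraints
```

For this particular model the relaxation already solves at 0/1 values, but with coupling constraints the relaxed solution is typically fractional, and that is where branching begins.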