> but what if `Fx` is some other arbitrary domain? I mean, I don't want the function `MyAlgorithm` to depend on another fixed function like `create_variables`.

JuMP doesn’t work by defining a single vector of decision variables that lives in a Cartesian product of domains. If you want something like that, you’ll need to code it yourself.

How about:

```
julia> using JuMP
julia> abstract type AbstractDomain end
julia> struct ZeroOne <: AbstractDomain; n::Int end
julia> add_variables(d::ZeroOne, model) = @variable(model, [1:d.n], Bin)
add_variables (generic function with 1 method)
julia> struct Rplus <: AbstractDomain; n::Int end
julia> add_variables(d::Rplus, model) = @variable(model, [1:d.n], lower_bound = 0)
add_variables (generic function with 2 methods)
julia> struct Zplus <: AbstractDomain; n::Int end
julia> add_variables(d::Zplus, model) = @variable(model, [1:d.n], lower_bound = 0, Int)
add_variables (generic function with 3 methods)
julia> function MyAlgorithm(Fx::Vector{<:AbstractDomain}, params)
           model = Model()
           x = reduce(vcat, add_variables.(Fx, model))
           return model, x
       end
MyAlgorithm (generic function with 1 method)
julia> model, x = MyAlgorithm([ZeroOne(3), Rplus(2), Zplus(3)], nothing)
(A JuMP Model
Feasibility problem with:
Variables: 8
`VariableRef`-in-`MathOptInterface.GreaterThan{Float64}`: 5 constraints
`VariableRef`-in-`MathOptInterface.Integer`: 3 constraints
`VariableRef`-in-`MathOptInterface.ZeroOne`: 3 constraints
Model mode: AUTOMATIC
CachingOptimizer state: NO_OPTIMIZER
Solver name: No optimizer attached., VariableRef[_[1], _[2], _[3], _[4], _[5], _[6], _[7], _[8]])
julia> model
A JuMP Model
Feasibility problem with:
Variables: 8
`VariableRef`-in-`MathOptInterface.GreaterThan{Float64}`: 5 constraints
`VariableRef`-in-`MathOptInterface.Integer`: 3 constraints
`VariableRef`-in-`MathOptInterface.ZeroOne`: 3 constraints
Model mode: AUTOMATIC
CachingOptimizer state: NO_OPTIMIZER
Solver name: No optimizer attached.
julia> x
8-element Vector{VariableRef}:
_[1]
_[2]
_[3]
_[4]
_[5]
_[6]
_[7]
_[8]
```
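A benefit of this dispatch-based design is that you (or your users) can support a new kind of domain by defining a new subtype and an `add_variables` method, without modifying `MyAlgorithm` at all. As an illustrative sketch (the `Interval` type here is my own invention, not part of JuMP):

```
# Hypothetical bounded-interval domain; not something JuMP provides.
struct Interval <: AbstractDomain
    lower::Float64
    upper::Float64
    n::Int
end

function add_variables(d::Interval, model)
    return @variable(model, [1:d.n], lower_bound = d.lower, upper_bound = d.upper)
end

# `MyAlgorithm` accepts the new domain unchanged:
model, x = MyAlgorithm([ZeroOne(2), Interval(-1.0, 1.0, 3)], nothing)
```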

Alternatively, you could ask callers to pass a vector of lower bounds, upper bounds, and integrality flags.

Perhaps:

```
julia> struct Domain
           lower::Float64
           upper::Float64
           discrete::Bool
       end
julia> function MyAlgorithm(Fx::Vector{Domain}, params)
           model = Model()
           @variable(
               model,
               Fx[i].lower <= x[i in 1:length(Fx)] <= Fx[i].upper,
               integer = Fx[i].discrete,
           )
           return model, x
       end
MyAlgorithm (generic function with 2 methods)
julia> Fx = vcat(
           fill(Domain(0.0, 1.0, true), 3),
           fill(Domain(0.0, Inf, false), 2),
           fill(Domain(0.0, Inf, true), 3),
       )
8-element Vector{Domain}:
 Domain(0.0, 1.0, true)
 Domain(0.0, 1.0, true)
 Domain(0.0, 1.0, true)
 Domain(0.0, Inf, false)
 Domain(0.0, Inf, false)
 Domain(0.0, Inf, true)
 Domain(0.0, Inf, true)
 Domain(0.0, Inf, true)
julia> model, x = MyAlgorithm(Fx, nothing)
(A JuMP Model
Feasibility problem with:
Variables: 8
`VariableRef`-in-`MathOptInterface.GreaterThan{Float64}`: 8 constraints
`VariableRef`-in-`MathOptInterface.LessThan{Float64}`: 3 constraints
`VariableRef`-in-`MathOptInterface.Integer`: 6 constraints
Model mode: AUTOMATIC
CachingOptimizer state: NO_OPTIMIZER
Solver name: No optimizer attached.
Names registered in the model: x, VariableRef[x[1], x[2], x[3], x[4], x[5], x[6], x[7], x[8]])
```
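With either approach, the returned `model` is an ordinary JuMP model, so you attach a solver and optimize as usual. For example, with HiGHS (assuming HiGHS.jl is installed; the objective here is just for illustration):

```
using HiGHS

set_optimizer(model, HiGHS.Optimizer)
@objective(model, Min, sum(x))
optimize!(model)
```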

A key benefit of JuMP is that you are free to construct the most appropriate data structure for your problem. You do not need to use built-in constructs. A downside is that choosing the most appropriate data structure can be difficult.