Three-state variable

Edit 1: Sorry, I found the lens (search icon) next to the avatar ^^

Hello

May I ask a newbie question? I need to make a signum variable from another variable.
(I tried things like the Int variable type and a sign function in the constraints, but I get errors from the solver/constraints.)

But I am failing even with a two-state (0, 1) variable:

using JuMP
using Ipopt

model = Model()
@variable(model, d)
@objective(model, Max, d)

@constraint(model, d^2 == d)

set_optimizer(model, Ipopt.Optimizer)
optimize!(model)

value(d)

EXIT: Optimal Solution Found.
d=0.0

But d can be 1 :-/

How is it possible to get sign-like behavior?

Edit 2:
I tried:

model = Model()
@variable(model, d)
@variable(model, x)

@objective(model, Max, d)

@constraint(model, x == 4.65)

# @constraint(model, abs(x) == d*x)
@constraint(model, d*x == abs(x))

ERROR: MethodError: no method matching abs(::VariableRef)

Thanks

I think Ipopt does not guarantee globally optimal solutions, and abs is not a valid function to call on a JuMP variable. It seems to me that you should look into quadratic or nonlinear formulations. Standard mixed-integer linear programming does not support multiplying one variable by another. You may need to use special macros, as pointed out here.
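For the abs example above, one workaround is to move the constraint into the @NLconstraint macro, which does accept abs. A minimal sketch, assuming Ipopt is installed:

```julia
using JuMP, Ipopt

model = Model(Ipopt.Optimizer)
@variable(model, d)
@variable(model, x)
@objective(model, Max, d)
@constraint(model, x == 4.65)
# `abs` is accepted inside @NLconstraint, unlike in a plain @constraint here.
# Beware: abs is nondifferentiable at 0, which can upset a derivative-based
# solver like Ipopt.
@NLconstraint(model, d * x == abs(x))
```

Since x is fixed at 4.65 > 0, any feasible point must have d == 1, though a local solver still gives no global guarantee.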

EDIT: And, as usual, I forget: Welcome to our community! :sweat_smile: :confetti_ball:

2 Likes

Thank you very much :slight_smile: it explains a lot and helps me move forward.

Edit: Thanks a lot. I feel very good about Julia, JuMP, and the forum. :slight_smile:

Edit 2: Thanks, I tried it via the macros; it's cool syntax :grin: but I'm not getting the optimum again with Ipopt :rofl: Still, a very, very good way/syntax of modeling.

It would help if you could explain in more detail what you are trying to achieve. It sounds like you want a binary or integer variable:

model = Model()
@variable(model, x, Bin)  # x in {0, 1}
@variable(model, 0 <= y <= 2, Int)  # y in {0, 1, 2}

but note that Ipopt does not support discrete variables.

However, you can use Juniper as an MINLP solver: GitHub - lanl-ansi/Juniper.jl: A JuMP-based Nonlinear Integer Program Solver
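A minimal Juniper setup might look like this (a sketch, assuming Juniper and Ipopt are installed; Juniper branches on the discrete variables and uses an NLP solver for the continuous relaxations):

```julia
using JuMP, Juniper, Ipopt

# Juniper needs an NLP solver for its relaxations; silence Ipopt's output.
nl_solver = optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0)
model = Model(optimizer_with_attributes(Juniper.Optimizer, "nl_solver" => nl_solver))
@variable(model, x, Bin)            # discrete variables are now allowed
@variable(model, 0 <= y <= 2, Int)
```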

2 Likes

@odow Thanks a lot, I will try that way :slight_smile:
I wanted to create a model with constraints containing the sign of (vector1*x1 + vector2*x2), where x1 and x2 are real scalars.

Take a read of Please read: make it easier to help you.

In particular, take the time to make a minimal working example (i.e., code that we can copy-and-paste) that demonstrates what you are trying to achieve and why.

If vector1 and vector2 are vectors, but x1 and x2 are scalars, the result is a vector, so computing the sign doesn’t make sense.

If your model is linear, you can use a MILP reformulation:

model = Model()
L, U = -100, 100
@variable(model, L <= x <= U)
@variable(model, y, Bin)
@constraint(model, x <= U * y)
@constraint(model, x >= L * (1 - y))
@expression(model, sgn, 2 * y - 1)

If your model is nonlinear, you may be better off with a smooth approximation:
https://math.stackexchange.com/questions/1264681/how-to-smoothly-approximate-a-sign-function
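For example (my own sketch, not taken from the linked page), tanh(k*x) with a large slope k is a common smooth stand-in for sign(x):

```julia
# Two smooth approximations of sign(x); both approach sign(x) in the limit.
smooth_sign(x; k = 100.0) = tanh(k * x)            # -> sign(x) as k -> Inf
smooth_sign2(x; eps = 1e-8) = x / sqrt(x^2 + eps)  # -> sign(x) as eps -> 0

smooth_sign(0.5)    # close to 1.0
smooth_sign(-0.5)   # close to -1.0
```

Larger k (or smaller eps) gives a sharper approximation, but also makes the function harder for a derivative-based solver near 0.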

1 Like

Thanks, I did look and will try to implement it.
:broccoli:
A minimal working example:
I wanted to maximize sum(sign(data1[i]*x1 + data2[i]*x2) * data[i]
for i = 1:length(data))

(sign can be restricted to only 1 and -1, without 0)

data, data1, data2 are vectors of reals around 0;
x1, x2 are real scalars (the values to find).

using JuMP
import Ipopt

n = 100

data = randn(n)
data1 = randn(n)
data2 = randn(n)

model = Model(Ipopt.Optimizer)
@variable(model, -10 <= x1 <= 10)  # bounds just to speed things up
@variable(model, -10 <= x2 <= 10)
@variable(model, -1 <= d[1:n] <= 1, start = 0.0)

# This alone allows x1 == x2 == 0 while d[j] == 1, so the next constraints fix that.
@NLconstraint(model, [j = 1:n], (data1[j]*x1 + data2[j]*x2) * d[j] == abs(data1[j]*x1 + data2[j]*x2))

@NLconstraint(model, abs(x1) >= 0.00000001)  # force x1 nonzero
@NLconstraint(model, abs(x2) >= 0.00000001)  # force x2 nonzero

@NLconstraint(model, [j = 1:n], abs(d[j]) == 1)  # d[j] is 1 or -1

@NLobjective(model, Max, sum(data[i] * d[i] for i = 1:n))

optimize!(model)

value(x1)
value(x2)

objective_value(model)

value(d[1])
value(d[2])
value(d[3])

This returns EXIT: Problem has too few degrees of freedom.
And with a little reformulation it didn't return optimal values of x1 and x2.

I will try the suggestion for the Int/Bin types, and fix the error I'm currently getting from Juniper :grinning_face_with_smiling_eyes:

I haven’t run this code, so there may be bugs, but this should get you on the right track. Your model can be formulated as a mixed-integer linear program:

using JuMP, Cbc
function main(data::Vector, data1::Vector, data2::Vector)
    N = length(data)
    @assert length(data1) == length(data2) == N
    # Make `M` sufficiently large so that it is bigger than |y|
    M = 10 * (maximum(abs.(data1)) + maximum(abs.(data2)))
    model = Model(Cbc.Optimizer)
    @variables(model, begin
        0 <= x_pos[1:2, [:p, :-]] <= 10
        z[1:N], Bin
    end)
    @expressions(model, begin
        x[i=1:2], x_pos[i, :p] - x_pos[i, :-]
        y[i=1:N], data1[i] * x[1] + data2[i] * x[2]
    end)
    @constraints(model, begin
        [i=1:2], x_pos[i, :p] + x_pos[i, :-] >= 0.000001
        [i=1:N], y[i] <= M * z[i]
        [i=1:N], y[i] >= -M * (1 - z[i])
    end)
    @objective(model, Max, sum(data[i] * (2 * z[i] - 1) for i = 1:N))
    return model
end
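To use it, a hypothetical driver might look like this (`main` only builds the model, so you still have to solve it and query the results):

```julia
# Hypothetical usage of the `main` function above (not run in the thread).
data, data1, data2 = randn(100), randn(100), randn(100)
model = main(data, data1, data2)
optimize!(model)
# `x` is a registered expression, so it can be queried after the solve:
x1, x2 = value(model[:x][1]), value(model[:x][2])
```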

The Mosek modeling cookbook contains a number of these reformulations: 9 Mixed integer optimization — MOSEK Modeling Cookbook 3.3.0

2 Likes

Thank you very much for that solver/whole code :four_leaf_clover:, and I looked at the book :slight_smile:
I will somehow try to get the scalar values x1 and x2.

Edit:

N = length(data)
@assert length(data1) == length(data2) == N

# Make `M` sufficiently large so that it is bigger than |y|
M = 10 * (maximum(abs.(data1)) + maximum(abs.(data2)))
model = Model(Cbc.Optimizer)
@variables(model, begin
    x[1:2]
    z[1:N], Bin
end)
@constraints(model, begin
    [i=1:N], (data1[i] * x[1] + data2[i] * x[2])*(2 * z[i] - 1) >= 0
end)
@objective(model, Max, sum(data[i] * (2 * z[i] - 1) for i = 1:N))

ERROR: MOI.ScalarQuadraticFunction{Float64}-in-MOI.GreaterThan{Float64} constraints are not supported and cannot be bridged into supported constrained variables and constraints. See details below:

“Usual Mixed-Integer Linear Programming does not support multiplying one variable by another.” ok :grin:

:broccoli:
Can I ask how to extract the values of x1 and x2 from the model you posted? I tried, but it didn't end well.

Edit:
There was solver info at optimize:
Pre-processing says infeasible or unbounded

What helped:

  • I found that data with values smaller than the 0.000001 tolerance acted very strangely (0.000001 behaved OK, 0.0000001 did not), so avoid too-small numbers. (I can only guess there may be a similar issue at the upper end.)

  • Making "tighter" intervals (on variables and constraints) helped the solver compute a solution.

  • If the solver can make an unlimited kind of "decision", it will :joy: (you need to restrict it).

  • An OR decision "for intervals" can be modeled via the cookbook mentioned before, as the "gap" in section 9.1.10.

Thanks, the book is really helpful :four_leaf_clover:
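The "gap" disjunction from cookbook section 9.1.10 can be sketched with one binary variable and big-M terms. A sketch of my reading of the pattern, assuming finite bounds L <= x <= U and Cbc installed:

```julia
using JuMP, Cbc

# Enforce the disjunction x <= a OR x >= b (with a < b), given L <= x <= U.
a, b, L, U = -1.0, 1.0, -10.0, 10.0
model = Model(Cbc.Optimizer)
@variable(model, L <= x <= U)
@variable(model, z, Bin)
@constraint(model, x <= a + (U - a) * z)        # z == 0 forces x <= a
@constraint(model, x >= b - (b - L) * (1 - z))  # z == 1 forces x >= b
```

With z == 0 the second constraint relaxes to x >= L, and with z == 1 the first relaxes to x <= U, so exactly one side of the gap is active.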

Your code is not the same as mine. You are missing the reformulation of x into the positive and negative parts ( x[1] == x_pos[1, :p] - x_pos[1, :-]), and you are missing the big-M constraints.

In my code, you can get the value for x1 as value(x[1]).

2 Likes

Hello,
the code could be simplified to:

@constraints(model, begin
    [i=1:N], 0.000001 - M * (1 - z[i]) <= data1[i] * x[1] + data2[i] * x[2]
    [i=1:N], data1[i] * x[1] + data2[i] * x[2] <= -0.000001 + M * z[i]
end)

(I think your code has a little quirk: for x[1] = 0 and x[2] = 0, z[i] is allowed to be 0 or 1.)
(I just added the 0.000001 value to prevent it.)

But I found that for N = 100 data points it took 3 seconds, and for N = 300 it took 46 seconds to find the optimal solution.
I need to calculate for N = 5000 and N = 50 000 :grin:

Is there some way to calculate N = 5000, or ideally 50 000? :man_farmer: :broccoli:
(Some trick in the reformulation? Or rounding?) Thanks a lot.

Working code:

using JuMP, Cbc

n = 50  # 500 is a problem

data = randn(n)
data1 = randn(n)
data2 = randn(n)

N = length(data)
@assert length(data1) == length(data2) == N

# Make `M` sufficiently large so that it is bigger than |y|
M = 10 * (maximum(abs.(data1)) + maximum(abs.(data2)))
model = Model(Cbc.Optimizer)
@variables(model, begin
    -10 <= x[1:2] <= 10
    z[1:N], Bin
end)

@constraints(model, begin
    [i=1:N], 0.000001 - M * (1 - z[i]) <= data1[i] * x[1] + data2[i] * x[2]
    [i=1:N], data1[i] * x[1] + data2[i] * x[2] <= -0.000001 + M * z[i]
end)
@objective(model, Max, sum(data[i] * (2 * z[i] - 1) for i = 1:N))

optimize!(model)
print(objective_value(model))

(I tried lower/upper M; I tried tighter bounds on "x" in @variable.)

(I tried to install all MILP solvers via JuMP, but found only two, Cbc and GLPK, that installed without errors ... but same result; GLPK ran for minutes without a solution.)

(I looked at parameters, but they didn't improve things:
set_optimizer_attribute(model, "threads", n)
set_optimizer_attribute(model, "cuts", "off")
set_optimizer_attribute(model, "sec", n)
)