# Vectorized Convex inequality constraint

I’m trying to vectorize an inequality constraint comparing two `Convex` types. On one side, I have `Convex.MaxAtom`s, and on the other side, I have `Variable`s. SO link here. I want to do something like the following:

```julia
using Convex

t = Variable(1)
v = Variable(10)
x = Variable(1)
z = rand(100)

problem = minimize(x)
problem.constraints += [t >= 0]

ccc = Convex.MaxAtom[]
for i = 1:10
    c = -(1. + minimum(x .* z))
    cc = t + c
    push!(ccc, max(cc, 0.))
end
problem.constraints += [ccc <= v]
```

but I’m getting the following error on the constraint:

```text
ERROR: LoadError: MethodError: no method matching isless(::Complex{Int64}, ::Int64)
```

I’m not sure where the `Int64` types are coming from. Is there a better way of adding this constraint besides looping through and adding individual comparisons like

```julia
for i = 1:10
    problem.constraints += [ccc[i] <= v[i]]
end
```

I’m trying to avoid this because eventually my 10 will be much larger.

Disclaimer: I don’t know Convex.
Why are you trying to avoid a `for` loop just because the 10 will grow? And why do you need to vectorize the constraint? Specifically, is it because you believe vectorization will make the code run faster in Julia (as it does in R and MATLAB)?

That said, you could probably do `problem.constraints .+= [ccc .<= v]`, though as I said, I don’t know Convex.

Thanks for your suggestion! I tried your method `problem.constraints .+= [ccc .<= v]` and made some progress in getting rid of the `Int` error, but I now end up with the following error:

```text
ERROR: LoadError: MethodError: Cannot `convert` an object of type Array{Convex.Constraint,1} to an object of type Convex.Constraint
This may have arisen from a call to the constructor Convex.Constraint(...),
since type constructors fall back to convert methods.
```
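For what it’s worth, my guess is that the broadcasted `.<=` builds an `Array{Convex.Constraint,1}` and `.+=` then tries to `convert` that whole array into a single `Convex.Constraint`, hence the error. A possible workaround is to stack the scalar atoms into one vector expression first and add a single vectorized constraint. This is only a sketch: it assumes Convex.jl’s `vcat` stacks scalar expressions the way it stacks arrays, which I haven’t verified:

```julia
using Convex

t = Variable(1)
v = Variable(10)
x = Variable(1)
z = rand(100)

problem = minimize(x)
problem.constraints += [t >= 0]

ccc = Convex.MaxAtom[]
for i = 1:10
    push!(ccc, max(t - (1. + minimum(x .* z)), 0.))
end

# Assumption: vcat stacks the 10 scalar atoms into one 10-element
# vector expression, so a single elementwise constraint can be added.
problem.constraints += [vcat(ccc...) <= v]
```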

To answer your questions: I’ve found that adding constraints sequentially with `problem.constraints += [ccc[i] <= v[i]]` in a loop is the most expensive way of adding constraints. Consider the tests below:

```julia
function test1(; N=100)
    x = Variable(N)
    p = minimize(norm(x))
    for i = 1:N
        z = rand()
        p.constraints += [x[i] >= z]
    end
end

function test2(; N=100)
    x = Variable(N)
    p = minimize(norm(x))
    z = Vector{Float64}(N)
    for i = 1:N
        zz = rand()
        z[i] = zz
    end
    p.constraints .+= [x .>= z]
end

function test3(; N=100)
    x = Variable(N)
    p = minimize(norm(x))
    z = Vector{Float64}(N)
    for i = 1:N
        zz = rand()
        z[i] = zz
    end
    p.constraints += [x >= z]
end

@time(test1(N=10000))
@time(test2(N=10000))
@time(test3(N=10000))

>>> 0.457911 seconds (140.03 k allocations: 1.124 GiB, 47.47% gc time)
>>> 0.056281 seconds (70.63 k allocations: 2.625 MiB)
>>> 0.039392 seconds (10.53 k allocations: 819.199 KiB)
```

and calling the functions a second time (so compilation is out of the picture), I see better performance, with a strong preference for `test3` (I will be running many simulations of the same structure for a given `N`):

```julia
@time(test1(N=10000));
@time(test2(N=10000));
@time(test3(N=10000));

>>> 0.378776 seconds (140.03 k allocations: 1.124 GiB, 48.17% gc time)
>>> 0.001322 seconds (60.06 k allocations: 2.139 MiB)
>>> 0.000226 seconds (50 allocations: 242.844 KiB)
```

So if there is a way of getting the third case to work, I think it’d significantly improve my performance.
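As a side note on why `test1` is so slow: `p.constraints += [c]` is shorthand for `p.constraints = p.constraints + [c]`, which copies the whole constraint list on every iteration, so the loop does O(N²) copying before Convex ever sees the problem. A plain-Julia sketch of the difference (no Convex involved):

```julia
# Growing an array with `+=` copies it on every iteration, which is
# what `p.constraints += [x[i] >= z]` does inside test1's loop: O(N^2).
function grow_copy(N)
    a = Int[]
    for i = 1:N
        a = a + [i]    # same as a += [i]: allocates a fresh array each time
    end
    return a
end

# Appending everything at once, as test3 does with a single vector
# constraint, touches the array only once: O(N).
function grow_once(N)
    a = Int[]
    append!(a, 1:N)
    return a
end

grow_copy(5) == grow_once(5)    # both build [1, 2, 3, 4, 5]
```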