A few things:

- Read the Julia performance tips: no global variables, etc.
- Julia is column-major, not row-major. Prefer `x[:, n]` over `x[n, :]`.
- JuMP prefers explicit summations over linear algebra. (In most cases we do well; sometimes we hit a slow path. I need to dig in more to understand what's going on.)
- You have lots of unneeded operations. It looks like your objective is summing over a symmetric matrix. Instead of going over every entry and then multiplying by `0.5`, just do one triangle.
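To see the triangle trick in isolation, here is a minimal sketch in plain Julia (no JuMP): summing one triangle of a symmetric matrix, with the diagonal scaled by `0.5`, gives the same value as `0.5` times the sum over every entry.

```julia
# For a symmetric S, each off-diagonal pair (n, m)/(m, n) appears twice in the
# full double sum, so 0.5 * full counts it once; the lower triangle also
# counts it once. The diagonal appears once in the triangle, hence the 0.5.
N = 4
A = rand(N, N)
S = A + A'  # symmetric matrix

full = 0.5 * sum(S[n, m] for n in 1:N, m in 1:N)
tri = sum((n == m ? 0.5 : 1.0) * S[n, m] for n in 1:N, m in 1:n)

@assert full ≈ tri
```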
Putting that together, I get:
```julia
using JuMP, LinearAlgebra

function old_foo(N)
    K = (xₙ, xₘ) -> (1 + xₙ ⋅ xₘ)^2
    C = 0.01
    x = rand(N, 2)
    y = rand(N)
    model = Model()
    @variable(model, 0 <= α[1:N] <= C)
    @objective(
        model,
        Min,
        0.5 * sum(
            @views α[n] * α[m] * y[n] * y[m] * K(x[n, :], x[m, :]) for
            n in 1:N, m in 1:N
        ) - sum(α),
    )
    @constraint(model, α ⋅ y == 0)
    return model
end
```
```julia
function new_foo(N)
    C = 0.01
    x = rand(2, N)
    y = rand(N)
    function K(n, m)
        tmp = sum(x[i, n] * x[i, m] for i in 1:2)
        return (n == m ? 0.5 : 1) * y[n] * y[m] * (1 + tmp)^2
    end
    model = Model()
    @variable(model, 0 <= α[1:N] <= C)
    @objective(
        model,
        Min,
        sum(K(n, m) * α[n] * α[m] for n in 1:N, m in 1:n) -
        sum(α[i] for i in 1:N),
    )
    @constraint(model, sum(α[i] * y[i] for i in 1:N) == 0)
    return model
end
```
(I didn't want to run `old_foo(7000)`; it took too long.)
```julia
julia> GC.gc()

julia> @time old_foo(2000);
  8.672760 seconds (172.02 M allocations: 13.852 GiB, 17.61% gc time)

julia> GC.gc()

julia> @time new_foo(2000);
  1.948423 seconds (16.02 M allocations: 1.395 GiB, 16.93% gc time)

julia> @time new_foo(7000);
 30.913255 seconds (196.08 M allocations: 16.975 GiB, 12.20% gc time)
```
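As an aside on the column-major point above, here is a small sketch (the names `col_sum` and `row_sum` are just for illustration): both compute the same totals, but the column version walks contiguous memory, which is why `new_foo` stores the data as `x = rand(2, N)` and indexes `x[i, n]`.

```julia
x = rand(2, 10_000)   # features as columns: x[:, n] is contiguous in memory
xt = Matrix(x')       # same data, transposed: xt[n, :] strides across memory

# Sum each sample's features; the view avoids copying the slice.
col_sum(x) = sum(sum(@view x[:, n]) for n in 1:size(x, 2))
row_sum(xt) = sum(sum(@view xt[n, :]) for n in 1:size(xt, 1))

@assert col_sum(x) ≈ row_sum(xt)
```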