JuMP vs SAS

@Dominic_Pazzula
1. Welcome to Julia Discourse!
2. When people benchmark program speeds, they usually put the code inside a function and do a separate warmup run, to separate "compile time" from "run time".
Consider the example below:

using BenchmarkTools
using Ipopt
using JuMP

function f()
    model = Model(with_optimizer(Ipopt.Optimizer, print_level=0))
    @variable(model, x, start = 0.0)
    @variable(model, y, start = 0.0)

    @NLobjective(model, Min, (1 - x)^2 + 100 * (y - x^2)^2)
    optimize!(model)
    println("x = ", value(x), " y = ", value(y))

    # adding a (linear) constraint
    @constraint(model, x + y == 10)
    optimize!(model)
    println("x = ", value(x), " y = ", value(y))
end

@time f() #warmup: 15.044558 seconds (44.22 M allocations: 2.180 GiB, 7.24% gc time)
@time f() #0.012105 seconds (2.12 k allocations: 145.898 KiB)
@benchmark f() #

function g()
    model = Model(with_optimizer(Ipopt.Optimizer, print_level=0))
    @variable(model, x, start = 0.0)
    @variable(model, y, start = 0.0)

    @NLobjective(model, Min, (5 - x)^2 + 100 * (y - x^2)^2)
    optimize!(model)
    println("x = ", value(x), " y = ", value(y))

    # adding a (linear) constraint
    @constraint(model, x + y == 10)
    optimize!(model)
    println("x = ", value(x), " y = ", value(y))
end

@time g() #0.077067 seconds (97.75 k allocations: 4.977 MiB)
@time g() #0.023970 seconds (2.50 k allocations: 162.836 KiB)
@benchmark g() #

You only pay the compilation price the first time in a given Julia session: once a method is compiled, later calls in the same REPL reuse the compiled code. The core developers are well aware of this, and reducing compiler latency is a top priority.
Personally, I only minded it during my first few weeks with Julia.
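The same first-call effect shows up with any freshly defined function, not just JuMP models. A minimal sketch (the function `h` here is hypothetical, purely for illustration):

```julia
# The first call to a function includes JIT compilation for the given
# argument types; subsequent calls with the same types reuse the
# compiled code and are much faster.
h(x) = x^2 + 1

t_first  = @elapsed h(3)   # includes compilation of h(::Int)
t_second = @elapsed h(3)   # compiled code is reused
println("first call: $(t_first) s, second call: $(t_second) s")
```

This is also why BenchmarkTools' `@benchmark` and `@btime` are preferred for timing: they run the expression many times and report statistics over those runs, so the one-off compilation cost doesn't distort the result.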
