Convex.jl: remarkable performance increase in the dev version (0.13.0) compared to v0.12.6 (now as fast as R/CVXR)

The new version performs orders of magnitude better than its predecessor on the (simple) benchmark described in the thread "Convex.jl (+SCS) more than 100* slower than R counterpart, CVXR + SCS" (#8 by reumle).

This enables me to try problems at least 100 times the size I could handle in early December :smiley:.
Great progress! Thanks and congratulations to the package maintainer(s)!
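
For reference, the benchmark problem, as reconstructed from the code posted below, is a least-absolute-deviations fit written as an LP (the strict inequalities in the code, such as x < 10, end up as non-strict bounds once the problem reaches the conic solver):

$$
\min_{x \in \mathbb{R}^{36},\; s \in \mathbb{R}^{m}} \; \sum_{i=1}^{m} s_i
\quad \text{s.t.} \quad
-s \le Ax - b \le s, \quad
-10 \le x \le 10, \quad
\sum_{j=1}^{36} x_j \ge 10,
$$

where $A \in \mathbb{R}^{m \times 36}$ and $b \in \mathbb{R}^{m}$ are random Gaussian data and $m$ (= nRow) is the size parameter that is varied.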

  • Part of the improved performance comes from the switch to Julia 1.3 (cf. the comment by ericphanson, post #9 in that thread).

  • Also, in the new version, running times scale linearly with the problem size, whereas in v0.12.x they increased quadratically (see the log-log fit sketch after the timing tables).

  • Performance is now essentially identical to R/CVXR.

Timings

  • Convex.jl v0.13, timings in seconds.
5×5 DataFrame
│ Row │ nCol  │ nRow    │ moment1                 │ setupTm │ solveTm │
│     │ Int64 │ Int64   │ DateTime                │ Float64 │ Float64 │
├─────┼───────┼─────────┼─────────────────────────┼─────────┼─────────┤
│ 1   │ 36    │ 10000   │ 2019-12-07T10:22:49.093 │ 0.003   │ 0.641   │
│ 2   │ 36    │ 31600   │ 2019-12-07T10:22:49.737 │ 0.01    │ 1.954   │
│ 3   │ 36    │ 100000  │ 2019-12-07T10:22:51.701 │ 0.037   │ 6.152   │
│ 4   │ 36    │ 316000  │ 2019-12-07T10:22:57.89  │ 0.125   │ 17.738  │
│ 5   │ 36    │ 1000000 │ 2019-12-07T10:23:15.754 │ 0.729   │ 52.752  │
  • Convex.jl v0.12.6 (also on Julia 1.3), timings in seconds.
    • Problem sizes here are reduced by a factor of 10; timings in that version increase as the square of nRow.
julia> df5
5×5 DataFrame
│ Row │ nCol  │ nRow   │ moment1                 │ setupTm │ solveTm │
│     │ Int64 │ Int64  │ DateTime                │ Float64 │ Float64 │
├─────┼───────┼────────┼─────────────────────────┼─────────┼─────────┤
│ 1   │ 36    │ 1000   │ 2019-12-07T12:22:15.076 │ 0.0     │ 0.09    │
│ 2   │ 36    │ 3160   │ 2019-12-07T12:22:15.166 │ 0.0     │ 0.486   │
│ 3   │ 36    │ 10000  │ 2019-12-07T12:22:15.652 │ 0.004   │ 3.923   │
│ 4   │ 36    │ 31600  │ 2019-12-07T12:22:19.579 │ 0.011   │ 38.047  │
│ 5   │ 36    │ 100000 │ 2019-12-07T12:22:57.637 │ 0.038   │ 392.442 │
  • R/CVXR
> df3
# A tibble: 5 x 5
   nCol    nRow moment              setupTm solveTm
  <dbl>   <dbl> <dttm>                <dbl>   <dbl>
1    36   10000 2019-12-07 10:28:11   0.140   0.792
2    36   31600 2019-12-07 10:28:12   0.131   2.28 
3    36  100000 2019-12-07 10:28:14   0.410   5.94 
4    36  316000 2019-12-07 10:28:21   0.968  17.7  
5    36 1000000 2019-12-07 10:28:39   2.70   55.2  
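
One way to quantify the two scaling claims above is to regress log(solveTm) on log(nRow); the slope estimates the growth exponent. A minimal sketch, with the solve times hard-coded from the two Convex.jl tables (the helper name and the fit itself are just for illustration, not part of the original benchmark):

using LinearAlgebra

# Least-squares estimate of p in solveTm ≈ c * nRow^p.
scaling_exponent(nRow, solveTm) = ([ones(length(nRow)) log.(nRow)] \ log.(solveTm))[2]

nRow_v13 = [10_000, 31_600, 100_000, 316_000, 1_000_000]
tm_v13   = [0.641, 1.954, 6.152, 17.738, 52.752]
nRow_v12 = [1_000, 3_160, 10_000, 31_600, 100_000]
tm_v12   = [0.09, 0.486, 3.923, 38.047, 392.442]

scaling_exponent(nRow_v13, tm_v13)  # ≈ 1.0 : roughly linear in nRow
scaling_exponent(nRow_v12, tm_v12)  # ≈ 1.8 : close to quadratic in nRow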

Code (v0.13)

  • You need to change two lines to run it under v0.12.6; see the comments in the code.
using Convex
using SCS
using Random
using Dates
using LinearAlgebra
using DataFrames

df5 = DataFrame(nCol = Int64[], nRow = Int64[], moment1 = DateTime[], setupTm = Float64[], solveTm = Float64[])

# v0.13: problem sizes
for nRow in [10000, 31600, 100000, 316000, 1000000]
# v0.12: use smaller problem sizes instead
# for nRow in [1000, 3160, 10000, 31600, 100000]

  m = nRow
  n = 36
  moment1 = Dates.now()

  #----------------------------------------- 1 build the problem (timed as setupTm)
  s = Variable(m)   # slack variables, one per row
  x = Variable(n)   # coefficient vector
  A = randn(m, n)
  b = randn(m)

  p = Problem(:minimize, sum(s), [A*x - b <= s, A*x - b >= -s, x > -10, x < 10, sum(x) > 10])
  moment2 = Dates.now()

  #----------------------------------------- 2 solve the problem (timed as solveTm)
  # version 0.13
  solve!(p, SCS.Optimizer(linear_solver = SCS.Direct, max_iters = 10))
  # version 0.12
  # solve!(p, SCSSolver(linear_solver = SCS.Direct, max_iters = 10))
  moment3 = Dates.now()

  #----------------------------------------- 3 collect timings, in seconds
  cc = [n, m, moment1, Dates.value(moment2 - moment1)/1000, Dates.value(moment3 - moment2)/1000]
  push!(df5, cc)
end
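
As a sanity check that the model itself solves correctly (the benchmark above caps SCS at max_iters = 10, so its solutions are generally not converged), one can run a single small instance and inspect the result with the usual Convex.jl v0.13 accessors; a minimal sketch:

# Small standalone instance (variables inside the loop above are local to it).
m, n = 1_000, 36
A, b = randn(m, n), randn(m)
s, x = Variable(m), Variable(n)
p = Problem(:minimize, sum(s), [A*x - b <= s, A*x - b >= -s, x > -10, x < 10, sum(x) > 10])
solve!(p, SCS.Optimizer(linear_solver = SCS.Direct))  # default iteration limit, so SCS can converge

p.status     # termination status reported through MathOptInterface
p.optval     # optimal objective, equal (up to solver tolerance) to sum(abs.(A*evaluate(x) - b))
evaluate(x)  # fitted coefficient vector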

That’s great! Very glad to see the performance improvements. In v0.13 we switched the intermediary layer to MathOptInterface, which does most of the hard work and is the product of years of work by its developers. It really shows, I think!
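
(For anyone wondering what the MathOptInterface layer looks like from the user side: SCS.Optimizer is itself a MathOptInterface optimizer, which is why it can be passed directly to solve! in v0.13. A tiny illustrative check, assuming the MathOptInterface package is installed alongside SCS:)

using MathOptInterface, SCS
const MOI = MathOptInterface

opt = SCS.Optimizer()
MOI.get(opt, MOI.SolverName())  # "SCS" -- Convex.jl v0.13 drives the solver through this MOI interface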
