Convex.jl: slow in creating constraints?

Hello,

I have code that builds constraints for Convex.jl in a loop; each iteration of the loop takes ~5 minutes (on fairly recent hardware). Is this expected or am I doing something wrong?

# solution vector
X = Variable(m*(num_frames+1));

# break it by frame index `t`
x = [X[(1 + (t-1)*m) : (t*m)]
     for t in 1:(1+num_frames)];

# build constraints
A = [];
for t in 1:(num_frames+1)
  push!(A, let
     # 0. introduce sign perturbation
     if t == 1
       a0 = x[1];
     else
       a0 = (x[t] .* rand([+1,-1], m));
     end
     # 1. convolution with `Q`; the result is a vector of size m-q+1
     a1 = conv(Q, a0);
     # 2. project into lo-res space (`P` is a constant dense matrix already created)
     a2 = P*a1;
     # fetch the corresponding lo-res vector out of `y`
     b = reshape(y[:, :, t], n);
     # now return the effective constraint
     norm(b-a2, 2) < ϵ
   end);
end

# ... later on:
problem = minimize(norm(X,1), A...);
solve!(problem, solver);

For context: I have been experimenting a bit with Convex.jl for an image processing application: in practice, what I am trying to do is a kind of deconvolution with modifications that should give a better result. The matrices involved in some constraints are very large and quickly overflow memory; however, they are block diagonal so I am trying to use multiple constraints (1 per block).

Thanks for any suggestion!

Riccardo


I should probably give sizes of the vectors involved:

  • X has ~500’000 entries
  • P has ~2500 x 10’000 entries
  • Q has ~900 entries
  • the loop is run 50 times

Which version of Convex are you on?

Either way, Convex currently represents variables, constraints, etc in a “vectorized way”, meaning scalar operations (like doing lots of indexing) will be slower than equivalent vector operations. Note that Convex.jl#master sends the problem to MathOptInterface (MOI) which represents everything with scalars; I’m guessing this mismatch isn’t ideal for performance. Anyway, the takeaway there is to formulate things with vector operations and not scalar operations when possible.

With that in mind, I think it might work better to write x = [Variable(m) for _ = 1:num_frames] and write the objective as objective = sum(norm(x[i], 1) for i = 1:num_frames), using that the 1-norm is just the sum of absolute values anyway. I’m guessing this transformation should help to avoid indexing into X everywhere.
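Roughly, something like this (just a sketch, untested, reusing m, num_frames, n, P, Q, y, ϵ and conv from your post):

x = [Variable(m) for _ in 1:(num_frames + 1)]

A = map(1:(num_frames + 1)) do t
    a0 = (t == 1) ? x[1] : x[t] .* rand([+1, -1], m)
    a1 = conv(Q, a0)              # convolution, as before
    a2 = P * a1                   # project into lo-res space
    b  = reshape(y[:, :, t], n)   # constant data, plain Julia
    norm(b - a2, 2) <= ϵ          # same constraint, but no indexing into one big X
end

# the 1-norm of the concatenation is the sum of the per-frame 1-norms
objective = sum(norm(xt, 1) for xt in x)
problem = minimize(objective, A...)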

Note that the broadcasted x[t] .* rand([+1,-1], m) doesn’t actually lower to scalar operations since there’s a method for it, so that should be fine.

I don’t see any other immediate wins though (but note I’m not really an expert; I’ve been lucky to only need to solve relatively small problems so far). There are the general performance tips, like putting it all in a function and using @views on the y[:, :, t], but I suspect those things aren’t the bottleneck here.
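For example, a tiny sketch of the @views tip (the helper name here is made up):

# hypothetical helper: avoid copying the slice of `y` before reshaping it
lowres_target(y, n, t) = @views reshape(y[:, :, t], n)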

Please let me know if defining x by a comprehension helps!

Hello @ericphanson! Many thanks for your answer and the suggestion; I’ll implement it and write back here.

For the record, I have tried Convex.jl version 0.12.5 and 0.12.6 on Julia 1.2.0 and 1.3.1; here’s my most recent Convex.jl entry from the manifest file:

[[Convex]]
deps = ["AbstractTrees", "BenchmarkTools", "LinearAlgebra", "MathOptInterface", "OrderedCollections", "SparseArrays", "Test"]
git-tree-sha1 = "98653ab2bf58b2a6518c662c5ab896f1d0a82c2d"
repo-rev = "MathOptInterface"
repo-url = "https://github.com/ericphanson/Convex.jl"
uuid = "f65535da-76fb-5f13-bab9-19810c17039a"
version = "0.12.6

Hello Riccardo, I’m not really sure whether this would help, but I did see a massive performance boost going to the Convex 0.13 development version. See this thread: Convex.jl : remarkable increase in perf in dev version (0.13.0) compared to v0.12.6, (now as fast as R/CVXR)

You may want to try it!

[And thanks for your issue #163 on SCS.jl, it enabled me to compile scs (the C library) in a Julia-compliant way, not a piece of cake! :crazy_face:]


Unfortunately using x = [Variable(m) for _ in 1:N] did not speed things up in any significant way. I’ll try @reumle’s suggestion of checking out the newer Convex.jl code.


Hello @reumle

thanks for your suggestion! I do not see any Convex.jl tagged 0.13.0 – is the “master” branch what you’re referring to?

and thanks for your issue #163 on SCS.jl, It enabled me to compile scs

Glad to know it’s been useful and not just me annoying the developers :slight_smile:

Hi @riccardomurri, here is my entry when I type status:
[f65535da] Convex v0.13.0 #master (https://github.com/JuliaOpt/Convex.jl.git)

I do not really remember how I did it, but the package documentation at 3. Managing Packages · Pkg.jl gives a lot of hints:

  1. You can ask for a specific branch/commit (or tag?). Quoting the Pkg docs:
    A specific version can be installed by appending a version after a @ symbol, e.g. @v0.4, to the package name:
(v1.0) pkg> add Example@0.4
 Resolving package versions...
  2. If that does not work, my bet is you can try to install Convex as an unregistered package. Quoting the Pkg docs again:

Adding unregistered packages

If a package is not in a registry, it can still be added by instead of the package name giving the URL to the repository to add:

(v1.0) pkg> add https://github.com/fredrikekre/ImportMacros.jl
  Updating git-repo `https://github.com/fredrikekre/ImportMacros.jl`

… For unregistered packages we could have given a branch name (or commit SHA1) to track using #, just like for registered packages.
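So something like one of these should track Convex.jl’s master branch (a sketch; the URL is the one shown in my status output above):

(v1.3) pkg> add https://github.com/JuliaOpt/Convex.jl.git#master

or, since Convex is a registered package, simply

(v1.3) pkg> add Convex#master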

Hope it helps!

Thanks for the detailed instructions @reumle! I installed Convex#master but unfortunately this didn’t change the running times. I guess I’ll have to restructure the code in some other way to get the speedups…

It might be worth trying JuMP; I think it is likely more performant here, and it looks like most of your transformations are linear, which I think are easy to do in JuMP (although I don’t have much experience with JuMP).

I am hoping to improve Convex.jl’s performance at some point by rewriting the internals to lower more directly to MOI objects, as JuMP does-- but I’m not sure when I’ll have time to do so (I’m writing my PhD thesis now, on mostly unrelated things…). Maybe in August? :sweat_smile:

Possibly someone else will step in before then.

Hello @ericphanson

First of all: many thanks for your support so far! And please excuse my delay in replying. I too am working part-time on this Julia code, so I know very well what you are talking about…

I had a look at the code in expressions.jl and the other source files where the functions that I’m using are defined, and the only possible culprit that I could see is the fact that the id_hash field initialization sweeps the passed structures recursively. A first attempt at confirming this using profiling data gave this trace:

julia> Profile.print(mincount=100)
26137 ./task.jl:333; (::REPL.var"#26#27"{REPL.REPLBackend})()
 26137 ...4/build/usr/share/julia/stdlib/v1.3/REPL/src/REPL.jl:118; macro expansion
  26137 ...4/build/usr/share/julia/stdlib/v1.3/REPL/src/REPL.jl:86; eval_user_input(::Any, ::REPL.REPLBackend)
   26137 ./boot.jl:330; eval(::Module, ::Any)
    26137 ...server-common/data/homes/rmurri/BEAM.jl/src/BEAM.jl:642; beam2(::Int64, ::Array{Float64,3}, ::Array{Float64,2}, ::Array{Float64,2}, ::A...
     26137 ...erver-common/data/homes/rmurri/BEAM.jl/src/BEAM.jl:701; beam2(::Int64, ::Array{Float64,3}, ::Array{Float64,2}, ::Array{Float64,2}, ::...
      26137 ...usr/share/julia/stdlib/v1.3/Profile/src/Profile.jl:25; macro expansion
       26137 ./array.jl:564; map
        26137 ./array.jl:635; _collect(::UnitRange{Int64}, ::Base.Generator{UnitRange{Int64},BEAM.var"#14...
         26137 ...rver-common/data/homes/rmurri/BEAM.jl/src/BEAM.jl:706; iterate
          26137 ...ver-common/data/homes/rmurri/BEAM.jl/src/BEAM.jl:989; make_constraint_A(::Array{Convex.Variable,1}, ::Int64, ::Array{Float64,2},...
           26137 ...er-common/data/homes/rmurri/BEAM.jl/src/BEAM.jl:1021; conv2d
            26118 ./array.jl:627; collect(::Base.Generator{Base.Iterators.ProductIterator{Tuple{UnitRange{I...
             26117 ./array.jl:646; collect_to_with_first!
              26117 ./array.jl:667; collect_to!(::Array{Convex.AdditionAtom,2}, ::Base.Generator{Base.Iterat...
               26117 ./generator.jl:47; iterate
                26117 ./none:0; #25
                 26116 ./reduce.jl:412; sum
                  26116 ./reduce.jl:395; sum
                   26116 ./reduce.jl:200; mapreduce
                    26116 ./reduce.jl:200; #mapreduce#194
                     26116 ./reduce.jl:72; mapfoldl
                      26116 ./reduce.jl:72; #mapfoldl#186
                       26113 ./reduce.jl:61; mapfoldl_impl(::Function, ::Function, ::NamedTuple{(),Tuple{}}, ::B...
                        338   ./reduce.jl:47; mapfoldl_impl(::typeof(identity), ::typeof(Base.add_sum), ::NamedT...
                         337 ./generator.jl:47; iterate
                          310 ./none:0; #26
                           168 ...ges/Convex/NF6BS/src/atoms/affine/index.jl:83; getindex
                            168 ...ges/Convex/NF6BS/src/atoms/affine/index.jl:80; getindex
                             168 ...ges/Convex/NF6BS/src/atoms/affine/index.jl:17; IndexAtom
                        25768 ./reduce.jl:49; mapfoldl_impl(::typeof(identity), ::typeof(Base.add_sum), ::NamedT...
                         25768 ./reduce.jl:21; add_sum
                          25768 ...vex/NF6BS/src/atoms/affine/add_subtract.jl:117; +
                           388   ...ex/NF6BS/src/atoms/affine/add_subtract.jl:70; Convex.AdditionAtom(::Convex.AdditionAtom, ::Convex.MultiplyAtom)
                            388 ./array.jl:827; append!
                           860   ...ex/NF6BS/src/atoms/affine/add_subtract.jl:77; Convex.AdditionAtom(::Convex.AdditionAtom, ::Convex.MultiplyAtom)
                            860 ./array.jl:869; push!
                             860 ./array.jl:827; _growend!
                           24416 ...ex/NF6BS/src/atoms/affine/add_subtract.jl:79; Convex.AdditionAtom(::Convex.AdditionAtom, ::Convex.MultiplyAtom)
                            24073 ./hashing.jl:18; hash
                             24070 .../packages/Convex/NF6BS/src/expressions.jl:46; hash(::Array{Convex.AbstractExpr,1}, ::UInt64)
                              243   ./hashing.jl:23; hash
                               215 ./hashing.jl:63; hash_uint
                              23800 ./reflection.jl:312; hash
198   ...urri/.julia/packages/Convex/NF6BS/src/expressions.jl:46; hash(::Array{Convex.AbstractExpr,1}, ::UInt64)
 198 ./reflection.jl:312; hash

Do you see an obvious way to speed that up?

Thanks,
R

I can confirm it’s the hash() function in expressions.jl that causes the slowness; if I replace its definition with return rand(UInt64); then the running time drops to <1 minute.
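(For the record, the diagnostic replacement was essentially the following; it is of course not a valid hash, since equal arrays no longer hash equally, but it was enough to confirm the bottleneck.)

function hash(a::Array{AbstractExpr}, h::UInt)
    return rand(UInt64)   # diagnostic only: every call returns a different value
end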

Ah, good profiling! The hashing is one of the bits of the Convex.jl codebase I understand least. That method in expressions.jl does look odd to me, because there is already a generic method for hashing arrays, and I’m not sure why we would need a special one. However, if I just remove the method, the tests show lots of errors with things like

  MethodError: no method matching !(::Convex.EqConstraint)

It looks like that’s coming from the hash method in Julia’s abstractarrays.jl file calling isequal which falls back to ==, which Convex.jl overloaded to create equality constraints.

That suggests instead of creating a custom hash method that avoids the call to isequal, we should just implement isequal between AbstractExprs and use the generic (and hopefully performant) hash function from Base.

How should we implement isequal? Well, every AbstractExpr in Convex already has an id_hash field. I was not around during the design discussions of Convex, but the paper says

One novel aspect of Convex is that expressions are identified via a unique identifier computed as a hash on the name of the atom at the head of the AST, together with the unique identifiers of its children in the AST. Thus, two different instances of the same expression will automatically be assigned the same unique identifier. This identification allows Convex to identify when the same expression is used in multiple places in the problem, to memoize and reuse the results of computations on a given expression, and to reduce the complexity of the problem that is ultimately passed to the solver.

I think id_hash is this unique identifier, and thus should give us a good way to compare equality. That is, we could add the method

Base.isequal(a::AbstractExpr, b::AbstractExpr) = a.id_hash == b.id_hash

And indeed, if I add that method and remove the hash method from expressions.jl, the tests pass.

I am a bit suspicious that this could introduce very subtle bugs, however. That’s because, from looking around Convex.jl’s codebase in the past, I’ve noticed that the atoms (which are all AbstractExprs) don’t always hash in their head field. For example, real and imag populate their id_hash just by hashing their children. That means with the isequal method above, we get

julia> a = ComplexVariable()
Variable
size: (1, 1)
sign: complex
vexity: affine
id: 153…265

julia> isequal(real(a), imag(a))
true

However, if we look at how these id_hash values are used, they are either used alone when they correspond to Variables, or they are used in pairs as (exp.head, exp.id_hash) for an expression exp. That suggests an easy way to work around our problem. We could instead define the method as

Base.isequal(a::AbstractExpr, b::AbstractExpr) = a.id_hash == b.id_hash && a.head == b.head

and I think that should work.

Could you try removing the hash method from expressions.jl and adding that last isequal method, and seeing if that solves the performance issue? (And does the output look right, or could there be some subtle bug?)

Actually, I think what I wrote above isn’t quite what we should do.

My understanding is that isequal means == (with slight differences for floating point numbers), and is what is used in dictionaries etc for comparing equality. And according to the docs, it should have the property that isequal(a,b) == true means hash(a) == hash(b). That means we need to at least also add a method

Base.hash(a::AbstractExpr, h::UInt) = hash((a.id_hash, a.head), h)

in addition to the isequal one above so that we do have the property that isequal(a,b) == true means hash(a) == hash(b).

But looking in the source of Julia’s hash(A::AbstractArray, h::UInt) method, they mention that colliding hashes will often be “compared by equality”, meaning isequal. So isequal should be a strict test of semantic equality, and hash should be a fast shortcut for it (one that could suffer collisions).

So I think defining isequal in terms of the id_hash fields (which are in turn defined by hashes) isn’t really quite right, because we are losing our “strict” test and saying let’s just always do the fast shortcut (which means hash collisions could be a problem). This was actually already brought up some years ago in Move away from hashing · Issue #78 · jump-dev/Convex.jl · GitHub, because we already rely on this fastpath for the dictionaries Convex.jl uses internally. So… I think maybe then it’s fine for now to add the two methods for hash(a::AbstractExpr, h::UInt) and isequal(a::AbstractExpr, b::AbstractExpr) instead of the slow hash(a::Array{AbstractExpr}, h::UInt) method, because we shouldn’t be any more vulnerable to hash collisions than we already are.
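Concretely, the proposal is to replace the hash(a::Array{AbstractExpr}, h::UInt) method in expressions.jl with these two methods:

Base.isequal(a::AbstractExpr, b::AbstractExpr) = a.id_hash == b.id_hash && a.head == b.head
Base.hash(a::AbstractExpr, h::UInt) = hash((a.id_hash, a.head), h)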

Edit: if you want to try this proposal, add the hashing branch: ] add Convex#hashing. See also the associated PR.

If it’s really computing hash, then you would be much better off defining hash(a::Array{<:AbstractExpr}, h::UInt) = h, as this is the minimal correct definition (for mutable structs).

@ericphanson note that hash for an array doesn’t actually hash all of the elements

I saw that; the source code mentions hashing log(n) of them (if n is the length of the array). I don’t really understand why, or why hash(a::Array{<:AbstractExpr}, h::UInt) = h is correct. Do you mind explaining?

The implication is x == y → hash(x) == hash(y); since with our definition hash(x, UInt(0)) = UInt(0) for every x, the right-hand side is always true, so the implication always holds.
Julia uses the object’s address as the default hash, which is incorrect if your type is mutable and compared by value. There was an issue about this somewhere and the developers even agreed (or my memory plays tricks on me :slight_smile:), but I don’t know what happened then. This is the reason that with mutable structs you always need to define hash alongside ==.

julia> mutable struct MyType val::Int end

julia> Base.:(==)(x::MyType, y::MyType) = x.val == y.val

julia> length(unique([MyType(1) for _ in 1:1000]))
598

julia> length(unique([MyType(1) for _ in 1:1000]))
593

julia> Base.hash(x::MyType, h::UInt) = h

julia> length(unique([MyType(1) for _ in 1:1000]))
1

EDIT: https://github.com/JuliaLang/julia/issues/12198

@ericphanson many thanks for preparing the PR in the #hashing branch of Convex.jl;
I’ve given it a try and unfortunately it seems not to solve the problem but likely just shift it around. Please let me know if you would like this conversation to continue in the PR or here.

I have again profiled my code (full tree dump below); this time the “hot” code path seems to be the following:

81108 ...vex/qMsri/src/atoms/affine/add_subtract.jl:117; +
 ...
 79090 ...ex/qMsri/src/atoms/affine/add_subtract.jl:79; Convex.AdditionAtom(::Convex.AdditionAtom, ::Convex.MultiplyAtom)
  78922 ./hashing.jl:18; hash
   ...
   35415 ./abstractarray.jl:2204; hash(::Array{Convex.AbstractExpr,1}, ::UInt64)
   ...
   39863 ./abstractarray.jl:2223; hash(::Array{Convex.AbstractExpr,1}, ::UInt64)

Looking at add_subtract.jl:79, I see this:

        return new(:+, hash(children), children, sz)

which might be causing recursion over the entire tree of operations?

Now: if id_hash is going to be used as a “fingerprint” to quickly test if two subexpressions are different, then Convex.jl needs to compute it for every subexpression; because of the way addition (and multiplication, etc.) atoms are built, the hash has to be derived from all children and there seems to be no better general solution than hash(::Array{AbstractExpr}). But then, when large and deep expressions are built, this just seems to be slow…

Following @abulak’s remark, I have tried:

Base.hash(a::AbstractExpr, h::UInt) = h

In my understanding, this would speed up the hashing but then make equality comparisons much slower; however this makes my code die in solve!() with KeyError: key 0xf4f7cf55a81fe6ca not found

Now I am trying to re-add the old definition of hash(::Array{AbstractExpr}, ::UInt64), to see if, combined with the new simpler definition of hash() for single AbstractExprs, it gives a better speed-up:

const hashaa_seed = UInt === UInt64 ? 0x7f53e68ceb575e76 : 0xeb575e7
Base.hash(a::AbstractExpr, h::UInt) = hash((a.id_hash, a.head), h)
function Base.hash(a::Array{AbstractExpr}, h::UInt)
    h += hashaa_seed
    h += hash(size(a))
    for x in a
        h = hash(x, h)
    end
    return h
end

Unfortunately, I do not see any speed-up with this either…

Anyway, here’s the full profile dump; it’s entirely possible that I picked out the wrong section as the culprit for slowness:

julia> Profile.print(mincount=100)
81809  ./task.jl:268; (::getfield(Revise, Symbol("##80#82")){REPL.REPLBackend})()
 81809 /home/rmurri/.julia/packages/Revise/439di/src/Revise.jl:975; run_backend(::REPL.REPLBackend)
  81809 ...4/build/usr/share/julia/stdlib/v1.2/REPL/src/REPL.jl:86; eval_user_input(::Any, ::REPL.REPLBackend)
   81809 ./boot.jl:330; eval(::Module, ::Any)
    81809 /home/rmurri/w/BEAM.jl/src/BEAM.jl:59; beam(::Int64, ::Array{Float64,3}, ::Array{Float64,2}, ::Array{Float64,2}, ::Ar...
     81809 /home/rmurri/w/BEAM.jl/src/BEAM.jl:110; beam(::Int64, ::Array{Float64,3}, ::Array{Float64,2}, ::Array{Float64,2}, ::A...
      81809 ./abstractarray.jl:2073; map
       81809 ./array.jl:548; collect_similar
        81809 ./array.jl:619; _collect(::UnitRange{Int64}, ::Base.Generator{UnitRange{Int64},getfield(BEA...
         81809 /home/rmurri/w/BEAM.jl/src/BEAM.jl:115; iterate
          81809 /home/rmurri/w/BEAM.jl/src/BEAM.jl:386; make_constraint_A(::Array{Convex.Variable,1}, ::Int64, ::Array{Float64,2},...
           81770 /home/rmurri/w/BEAM.jl/src/BEAM.jl:421; conv2d(::Array{Float64,2}, ::Convex.Variable)
            81761 ./array.jl:611; collect(::Base.Generator{Base.Iterators.ProductIterator{Tuple{UnitRange{I...
             81760 ./array.jl:630; collect_to_with_first!
              81760 ./array.jl:651; collect_to!(::Array{Convex.AdditionAtom,2}, ::Base.Generator{Base.Iterat...
               81760 ./generator.jl:47; iterate
                81760 ./none:0; #15
                 81760 ./reduce.jl:420; sum
                  81760 ./reduce.jl:403; sum
                   81760 ./reduce.jl:208; mapreduce
                    81760 ./reduce.jl:208; #mapreduce#192
                     81760 ./reduce.jl:72; mapfoldl
                      81760 ./reduce.jl:72; #mapfoldl#188
                       81758 ./reduce.jl:61; mapfoldl_impl
                        649   ./reduce.jl:47; mapfoldl_impl(::typeof(identity), ::typeof(Base.add_sum), ::NamedT...
                         648 ./generator.jl:47; iterate
                          640 ./none:0; (::getfield(BEAM, Symbol("##16#18")){Int64,Int64,Array{Float64,2}...
                           202 ...ges/Convex/qMsri/src/atoms/affine/index.jl:83; getindex
                            202 ...ges/Convex/qMsri/src/atoms/affine/index.jl:80; getindex
                             199 ...ges/Convex/qMsri/src/atoms/affine/index.jl:17; Type
                              173 ./hashing.jl:18; hash
                               173 ./tuple.jl:303; hash
                                115 ./tuple.jl:303; hash(::Tuple{UnitRange{Int64},UnitRange{Int64},Nothing}, ::UIn...
                           342 .../qMsri/src/atoms/affine/multiply_divide.jl:102; *(::Float64, ::Convex.IndexAtom)
                            325 .../qMsri/src/atoms/affine/multiply_divide.jl:30; Type
                             254 ./hashing.jl:18; hash
                              254 ./tuple.jl:303; hash
                               190 .../packages/Convex/qMsri/src/expressions.jl:38; hash
                        81108 ./reduce.jl:49; mapfoldl_impl(::typeof(identity), ::typeof(Base.add_sum), ::NamedT...
                         81108 ./reduce.jl:21; add_sum
                          81108 ...vex/qMsri/src/atoms/affine/add_subtract.jl:117; +
                           499   ...ex/qMsri/src/atoms/affine/add_subtract.jl:70; Convex.AdditionAtom(::Convex.AdditionAtom, ::Convex.MultiplyAtom)
                            497 ./array.jl:811; append!
                           1410  ...ex/qMsri/src/atoms/affine/add_subtract.jl:77; Convex.AdditionAtom(::Convex.AdditionAtom, ::Convex.MultiplyAtom)
                            1408 ./array.jl:853; push!
                             1408 ./array.jl:811; _growend!
                           79090 ...ex/qMsri/src/atoms/affine/add_subtract.jl:79; Convex.AdditionAtom(::Convex.AdditionAtom, ::Convex.MultiplyAtom)
                            78922 ./hashing.jl:18; hash
                             2477  ./abstractarray.jl:2197; hash(::Array{Convex.AbstractExpr,1}, ::UInt64)
                              2477 ./indices.jl:446; getindex
                             158   ./abstractarray.jl:2203; hash(::Array{Convex.AbstractExpr,1}, ::UInt64)
                              117 ./array.jl:728; getindex
                             35415 ./abstractarray.jl:2204; hash(::Array{Convex.AbstractExpr,1}, ::UInt64)
                              7722 ./pair.jl:15; Pair(::Int64, ::Convex.MultiplyAtom)
                               7594 ./pair.jl:12; Type
                              8443 ./pair.jl:52; hash(::Pair{Int64,Convex.MultiplyAtom}, ::UInt64)
                               323  ./float.jl:565; hash
                                262 ./float.jl:561; hx
                                 223 ./hashing.jl:62; hash_uint64
                               7800 ...packages/Convex/qMsri/src/expressions.jl:38; hash
                                1192 ./tuple.jl:303; hash(::Tuple{UInt64,Symbol}, ::UInt64)
                                 585 ./float.jl:564; hash
                                  187 ./float.jl:66; Type
                                  372 ./float.jl:561; hx
                                   281 ./hashing.jl:62; hash_uint64
                                 583 ./tuple.jl:303; hash
                                  245 ./hashing.jl:23; hash
                                   189 ./hashing.jl:63; hash_uint
                                  311 ./reflection.jl:312; hash
                             39863 ./abstractarray.jl:2223; hash(::Array{Convex.AbstractExpr,1}, ::UInt64)
                              149   ./array.jl:0; findprev(::getfield(Base, Symbol("##60#61")){Base.Fix2{typeof...
                              173   ./array.jl:1876; findprev(::getfield(Base, Symbol("##60#61")){Base.Fix2{typeof...
                              225   ./array.jl:1878; findprev(::getfield(Base, Symbol("##60#61")){Base.Fix2{typeof...
                               100 ./int.jl:49; <
                              2817  ./array.jl:1880; findprev(::getfield(Base, Symbol("##60#61")){Base.Fix2{typeof...
                               123 ./array.jl:728; getindex
                              26959 ./operators.jl:894; !
                              3259  ./operators.jl:939; isequal(::Convex.MultiplyAtom)
                               3178 ./operators.jl:924; Type
575332 ./task.jl:327; task_done_hook(::Task)
 575332 ./task.jl:591; wait()
  575332 ./task.jl:564; poptaskref(::Base.InvasiveLinkedListSynchronized{Task})

I have spent some more time digging into it, and now I’m convinced that the key point is the performance of method hash(a::Array{Convex.AbstractExpr,1}, h::UInt).

  • With the Convex.jl#hashing branch, this method falls back on the generic hash(::AbstractArray) one, and one iteration of my code takes about 40 minutes.

  • With Convex.jl#master, this method has a custom implementation in expressions.jl and one iteration of my code takes about 15 minutes:

      const hashaa_seed = UInt === UInt64 ? 0x7f53e68ceb575e76 : 0xeb575e7
      function hash(a::Array{AbstractExpr}, h::UInt)
          h += hashaa_seed
          h += hash(size(a))
          for x in a
              h = hash(x, h)
          end
          return h
      end
    
  • With this for-loop instead, one iteration of my code takes about 7 minutes:

      function hash(a::Array{AbstractExpr}, h::UInt)
          h += hashaa_seed
          h += hash(size(a))
          for x in a
              h = (h * hashaa_seed) ⊻ x.id_hash
          end
          return h
      end
    
  • Changing the loop into a map/reduce, one iteration takes about 4 minutes:

      function hash(a::Array{AbstractExpr}, h::UInt)
          h += hashaa_seed
          h += hash(size(a))
          h += reduce(
              (x1, x2) -> ((x1 * hashaa_seed) ⊻ x2),
              map((x -> x.id_hash), a),
          )
          return h
      end
    
  • By just using a random number (which is likely the minimal possible code such that all expression arrays hash differently), one iteration takes about 1 minute:

      hash(a::Array{AbstractExpr}, h::UInt) = rand(UInt);
    

Surprisingly enough, reduce(..., map(...)) gives a 2x speedup not only over the for loop but also over mapreduce(...).

(All measurements done on Julia 1.2 using 1 thread on a 4-thread laptop computer.)
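(In case anyone wants to poke at this outside of Convex.jl, here is a rough micro-benchmark sketch with BenchmarkTools, using a stand-in struct rather than real AbstractExprs; numbers from real expression trees may of course differ.)

using BenchmarkTools

# stand-in for an AbstractExpr: only the field the hash variants look at
struct FakeExpr
    id_hash::UInt64
end

const fake_seed = 0x7f53e68ceb575e76

function hash_loop(a::Vector{FakeExpr}, h::UInt)
    h += fake_seed
    h += hash(size(a))
    for x in a
        h = (h * fake_seed) ⊻ x.id_hash
    end
    return h
end

function hash_mapred(a::Vector{FakeExpr}, h::UInt)
    h += fake_seed
    h += hash(size(a))
    h += reduce((x1, x2) -> (x1 * fake_seed) ⊻ x2, map(x -> x.id_hash, a))
    return h
end

a = [FakeExpr(rand(UInt64)) for _ in 1:100_000]
@btime hash_loop($a, zero(UInt))
@btime hash_mapred($a, zero(UInt))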


Thanks very much for the benchmarks! It looks like a decent speed-up of the hashing code is possible then. I’d be happy of course to take a PR if we can be sure it doesn’t affect correctness.

I get what @abulak means now with x == y → hash(x) == hash(y) but I don’t think that’s the only requirement Convex.jl has. Note also that I think == is a bit beside the point now that we have isequal; I think the fact that we are using == for our own purposes (making constraints) is more okay now than it was a long time ago, because we can separately define isequal which is what should be used for equality comparison for dictionaries and the like.

Internally, Convex uses OrderedDicts keyed by hashes to store expressions. What we need from our hashing is for that to work, i.e., we need those keys to be identical if and only if Convex.jl should treat the expressions as identical. This is used for caching subexpressions so we don’t need to regenerate their conic forms if we already have, within a particular solve call. But I think the way Convex is designed, it’s also needed for correctness: if we give the “same” variable two different hashes, for example, I think they’ll get treated as two separate variables in the optimization problem. So at least for variables we need to make sure that semantically-the-same objects get hashed the same, and maybe the same is true for other AbstractExprs.
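As a toy illustration of that caching pattern (the names and types here are made up, not Convex’s actual internals):

# toy stand-ins, not Convex's real types
struct ToyExpr
    head::Symbol
    id_hash::UInt64
end

# pretend this is a costly conic-form computation
expensive_lowering(e::ToyExpr) = (sleep(0.1); [1.0, 2.0])

const conic_cache = Dict{Tuple{Symbol,UInt64},Vector{Float64}}()

# reuse the cached result whenever the (head, id_hash) key is the same
cached_form(e::ToyExpr) = get!(() -> expensive_lowering(e), conic_cache, (e.head, e.id_hash))

x1 = ToyExpr(:variable, UInt64(0x1234))
x2 = ToyExpr(:variable, UInt64(0x1234))   # the "same" variable: same key, one cache entry
@assert cached_form(x1) === cached_form(x2)

If the two occurrences of the “same” variable had different id_hash values, they would instead get two cache entries (and, downstream, be treated as two different variables).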

I think long-term we should move away from the dictionaries, but that’s a bit of a separate discussion.