I have noticed that using sparse interval matrices incurs a big performance hit:
using ValidatedNumerics, SparseArrays
import ValidatedNumerics.IntervalArithmetic.hull

n = 10000
P = sprand(n, n, 0.01)   # sparse matrix with uniform entries
Q = sprandn(n, n, 0.01)  # sparse matrix with normal entries

# extend hull to real scalars so it can be broadcast over sparse matrices
hull(a::Real, b::Real) = hull(Interval(a), Interval(b))

AA = hull.(P, Q)  # sparse interval matrix
A = P + Q         # sparse Float64 matrix with the same number of stored entries
b = ones(n, 1)
bb = AA * b       # interval vector
# A is a 10000×10000 SparseMatrixCSC{Float64,Int64} with 1988652 stored entries
# AA is a 10000×10000 SparseMatrixCSC{Interval{Float64},Int64} with 1988652 stored entries
# b is a 10000×1 Array{Float64,2}
# bb is a 10000×1 Array{Interval{Float64},2}
function product(A, b)
    return A * b
end
@time product(A, b);
@time product(AA, bb);
prints
0.007439 seconds (1.60 k allocations: 173.437 KiB)
0.150222 seconds (1.60 k allocations: 251.421 KiB)
on my machine. The higher allocation total is expected, since the second call produces a vector of intervals rather than a vector of reals, but I was expecting it to be 2-4x slower, not 20x.
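My 2-4x guess is based on the scalar cost of interval arithmetic; a quick way to check that assumption is to time a single multiply-add in both types (a rough sketch assuming BenchmarkTools.jl is installed; the variable names are just placeholders):

using BenchmarkTools
x, y = rand(), rand()
xi, yi = Interval(x), Interval(y)
@btime $x * $y + $x      # plain Float64 multiply-add
@btime $xi * $yi + $xi   # the same operation on intervals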
Am I doing something wrong? Is there a package that implements this operation more efficiently? I checked IntervalMatrices.jl, but that seems to do something else entirely.
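For what it's worth, one workaround I have been considering is a midpoint-radius representation: split AA into a sparse Float64 midpoint matrix and a sparse Float64 radius matrix, so that multiplying the interval matrix by a point vector reduces to two ordinary Float64 sparse matvecs. This is only a sketch: mid_radius and mr_product are names I made up, it only covers a point (non-interval) right-hand side, and it ignores the outward rounding of the Float64 products, so as written it is not a guaranteed enclosure.

# split a sparse interval matrix into midpoint and radius parts;
# mid and radius map the zero interval to 0.0, so the broadcasts stay sparse
mid_radius(M) = (mid.(M), radius.(M))

# interval matrix * point vector via two Float64 sparse products
# NOTE: no directed rounding here, so the result is only an approximate enclosure
function mr_product(Am, Ar, b)
    c = Am * b        # midpoints of the result
    r = Ar * abs.(b)  # radii of the result
    return Interval.(c .- r, c .+ r)
end

Am, Ar = mid_radius(AA)
@time mr_product(Am, Ar, b);

Would something along these lines be the recommended approach, or is there a better way?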