# Fast sparse interval matrices?

I have noticed that using sparse interval matrices is a big performance hit:

``````julia
using ValidatedNumerics, SparseArrays
import ValidatedNumerics.IntervalArithmetic.hull

n = 10000
P = sprand(n, n, 0.01)
Q = sprandn(n, n, 0.01)
hull(a::Real, b::Real) = hull(Interval(a), Interval(b))
AA = hull.(P,Q)
A = P + Q
b = ones(n, 1)
bb = AA * b

# A is a 10000×10000 SparseMatrixCSC{Float64,Int64} with 1988652 stored entries
# AA is a 10000×10000 SparseMatrixCSC{Interval{Float64},Int64} with 1988652 stored entries
# b is a 10000×1 Array{Float64,2}
# bb is a 10000×1 Array{Interval{Float64},2}

function product(A, b)
    c = A*b
end

@time product(A, b);
@time product(AA, bb);
``````

returns

``````
  0.007439 seconds (1.60 k allocations: 173.437 KiB)
  0.150222 seconds (1.60 k allocations: 251.421 KiB)
``````

on my machine. The larger memory footprint is expected, since the second product computes a vector of intervals rather than a vector of reals, but I was expecting it to be 2-4x slower, not 20x.

Am I doing something wrong? Is there a package where this operation is implemented in a more performant way? I checked `IntervalMatrices.jl` but that seems to do something else entirely.

> Am I doing something wrong?

Hmm, I don’t think so. Comparing dense-dense and sparse-sparse products with interval coefficients also gives a ~30x slowdown in your case:

``````julia
using IntervalArithmetic, BenchmarkTools, SparseArrays

for n in [10^2, 10^3, 10^4]
    A = rand(n, n)
    b = rand(n)
    Aint = Interval.(A)
    bint = Interval.(b)

    res = @benchmark $A * $b
    resint = @benchmark $Aint * $bint

    println("n = $n, ", minimum(resint.times) / minimum(res.times))
end

# n = 100, 327.13601841196777
# n = 1000, 352.0812585088954
# n = 10000, 126.64196957958376

for n in [10^2, 10^3, 10^4]
    A = sprandn(n, n, 0.1)
    b = sprandn(n, 0.1)
    Aint = Interval.(A)
    bint = Interval.(b)

    res = @benchmark $A * $b
    resint = @benchmark $Aint * $bint

    println("n = $n, ", minimum(resint.times) / minimum(res.times))
end

# n = 100, 8.976806165477319
# n = 1000, 39.30257168866636
# n = 10000, 32.14636022251558
``````
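For intuition on where the overhead comes from: each interval multiplication must form directed-rounded products of the endpoints and select their min/max, so a single scalar `*` turns into several floating-point operations plus comparisons, and the branching also defeats SIMD vectorization. A minimal sketch of the classical endpoint rule, using a hypothetical `Ival` type of my own and ignoring the directed rounding that a real implementation additionally needs:

``````julia
# Toy interval type: NOT rigorous, since it ignores rounding-mode control.
struct Ival
    lo::Float64
    hi::Float64
end

function mul(x::Ival, y::Ival)
    # Four products instead of one, then min/max over all of them.
    p1 = x.lo * y.lo
    p2 = x.lo * y.hi
    p3 = x.hi * y.lo
    p4 = x.hi * y.hi
    Ival(min(p1, p2, p3, p4), max(p1, p2, p3, p4))
end

c = mul(Ival(-1.0, 2.0), Ival(3.0, 4.0))   # c == Ival(-4.0, 8.0)
``````

On top of this, a validated library like `IntervalArithmetic.jl` must also guarantee correct outward rounding, which adds further per-operation cost.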

> Is there a package where this operation is implemented in a more performant way?

See e.g. this paper by Ozaki et al., though I’m not aware of a Julia implementation of such methods!
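The core idea in that line of work (e.g. Rump-style midpoint-radius algorithms, which such results build on) is to represent an interval matrix by its midpoint and radius and to enclose a product with a few ordinary floating-point matrix products, which can go through BLAS. A rough sketch, where the function name `midrad_mul` is my own, and a rigorous version would additionally have to bound the rounding errors of the floating-point products below, which this sketch omits:

``````julia
# Enclose [mA - rA, mA + rA] * [mb - rb, mb + rb] in [mc - rc, mc + rc]
# using only ordinary matrix products (BLAS-friendly, hence fast).
# NOT rigorous: rounding errors of these products are not accounted for.
function midrad_mul(mA, rA, mb, rb)
    mc = mA * mb                                 # midpoint product
    rc = abs.(mA) * rb + rA * (abs.(mb) + rb)    # radius overestimate
    return mc, rc
end

# Example: rows of A are [1, 0..1] and [1, -1] in midpoint-radius form.
mA = [1.0 0.5; 1.0 -1.0]; rA = [0.0 0.5; 0.0 0.0]
mb = [1.0, 2.0];          rb = [0.0, 0.0]
mc, rc = midrad_mul(mA, rA, mb, rb)   # mc == [2.0, -1.0], rc == [1.0, 0.0]
``````

The payoff is that the expensive part is two or three plain `Float64` matrix products rather than millions of individual interval multiplications.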

> I checked `IntervalMatrices.jl` but that seems to do something else entirely.

We developed that package to get tighter enclosures for some interval matrix operations (mainly matrix powers and the matrix exponential); for example, consider this simple case (taken from this paper):

``````julia
julia> using IntervalMatrices

julia> A = [1 0 .. 1; 1 -1]
2×2 Array{Interval{Float64},2}:
[1, 1]    [0, 1]
[1, 1]  [-1, -1]

julia> A * A
2×2 Array{Interval{Float64},2}:
[1, 2]  [-1, 1]
[0, 0]   [1, 2]

julia> square(IntervalMatrix(A))
2×2 IntervalMatrix{Float64,Interval{Float64},Array{Interval{Float64},2}}:
[1, 2]  [0, 0]
[0, 0]  [1, 2]
``````

With the “naive” product you get the interval `-1 .. 1` for the (1, 2) coefficient, but by reordering the way the products are taken, `square` gives you zero width, `0 .. 0`.
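The underlying reason is subdistributivity of interval arithmetic: `x*(y + z) ⊆ x*y + x*z`, with equality only in special cases, because the distributed form treats the two occurrences of `x` as independent intervals. A tiny illustration of the effect, assuming `IntervalArithmetic` for the `..` constructor:

``````julia
using IntervalArithmetic

x = 0 .. 1

# Distributed form: the two x's are treated as independent intervals,
# so the sum ranges over all pairs of choices from x.
distributed = x * 1 + x * (-1)   # == -1 .. 1

# Factored form: the zero sum is computed first, so the width vanishes.
factored = x * (1 + (-1))        # == 0 .. 0
``````

This is exactly what happens in the (1, 2) entry above, where `square` evaluates `a12*(a11 + a22)` instead of `a11*a12 + a12*a22`.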

That said, it may be worthwhile to share what your “final” problem is, since there may be other strategies, e.g. doing some intermediate operations lazily.