Are Rationals that expensive?


I’m defining an algebra structure on ℝⁿ, i.e. apart from the standard (vector) addition I want to multiply vectors. The structure of vector multiplication is encoded in a (Cayley) table pm::Array{Int,2}, i.e. pm[i,j] = k when eᵢ⋅eⱼ = eₖ. The full function computing the product is

function algebra_multiplication{T<:Number}(X::Vector{T}, Y::Vector{T}, pm::Array{Int,2})
    result = zeros(X)
    for (i, x) in enumerate(X)
        if x != 0
            for (j, index) in enumerate(pm[i, :])
                if Y[j] != 0
                    result[index] += x*Y[j]
                end
            end
        end
    end
    return result
end
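For concreteness, here is a toy instance of such a table (my own illustration, not from the thread): the group algebra of ℤ/2ℤ, where e₁ is the identity and e₂⋅e₂ = e₁. The function is restated in a version-neutral, untyped form so the snippet is self-contained:

```julia
# Toy Cayley table for the group algebra of Z/2Z (hypothetical example data):
# e1 is the identity, e2*e2 = e1, so pm[i,j] = k encodes e_i ⋅ e_j = e_k.
pm = [1 2;
      2 1]

# Version-neutral restatement of the function above.
function algebra_multiplication(X, Y, pm)
    result = zeros(eltype(X), length(X))
    for (i, x) in enumerate(X)
        if x != 0
            for (j, index) in enumerate(pm[i, :])
                if Y[j] != 0
                    result[index] += x * Y[j]
                end
            end
        end
    end
    return result
end

# (a*e1 + b*e2)(c*e1 + d*e2) = (ac + bd)*e1 + (ad + bc)*e2
algebra_multiplication([1.0, 2.0], [3.0, 4.0], pm)  # -> [11.0, 10.0]
```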

This works very well for floats:

@time algebra_multiplication(D,D,pm);
0.000064 seconds (32 allocations: 56.875 KB) 

However for arguments of type Rational{Int}, or Rational{BigInt} it is three orders of magnitude slower:

0.014855 seconds (54.67 k allocations: 2.264 MB)
0.043915 seconds (447.38 k allocations: 12.445 MB)

Is it expected, or should I dig more?


Can you show the code you’re using to time things? It’s not obvious to me why Rational{Int} is so slow. The slowness of Rational{BigInt} seems a lot less surprising.


It’s because it does a reduction at every step. Even if you write it as A*B with rational matrices it will be very slow. What we need is the ability to compute all of the elements and then, after the full multiplication, reduce each element once.
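A minimal sketch of that delayed-reduction idea (my own illustration, for a sum rather than the full multiplication): accumulate an unreduced numerator/denominator pair, using BigInt so the intermediates cannot overflow, and let the Rational constructor perform the single gcd at the end:

```julia
# Delayed reduction: keep an unreduced num/den pair during the loop,
# and call gcd (via the Rational constructor) only once at the end.
function sum_unreduced(xs::Vector{Rational{Int}})
    num, den = big(0), big(1)           # BigInt accumulator: no overflow
    for x in xs
        # a/b + c/d = (a*d + c*b) / (b*d), with no intermediate reduction
        num = num * x.den + x.num * den
        den *= x.den
    end
    return Rational{BigInt}(num, den)   # single reduction happens here
end

sum_unreduced([1//2, 1//3, 1//6])  # -> 1//1
```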


I think you got me!

I’ve rewritten the old timing code, and while I’m quite sure I kept it the same, the results are no longer the same:

import Base: rationalize
function rationalize{T<:Integer, S<:Real}(::Type{T}, X::AbstractArray{S}; tol::Real=eps(eltype(X)))
    r(x) = rationalize(T, x, tol=tol)
    return r.(X)
end

function r{T<:Integer}(::Type{T}, a)
    return rationalize(T, a)
end

@time algebra_multiplication(D,D,pm);
D_rInt = r(Int, D)
@time algebra_multiplication(D_rInt, D_rInt, pm);
D_rBigInt = r(BigInt, D)
@time algebra_multiplication(D_rBigInt,D_rBigInt, pm);

  0.000068 seconds (32 allocations: 56.875 KB)
  0.000377 seconds (32 allocations: 99.500 KB)
  0.003704 seconds (54.43 k allocations: 1.537 MB)

btw, is the rationalize I wrote the right way to broadcast a function over arrays (the lower r is just a convenience function)?


Ok, it turns out that this multiplication is the bottleneck; the very good results above were due to the too-simplistic example chosen (i.e. the vector D, which was very close to an Int vector).
A real example has (the same code, different D) the following timing:

@time algebra_multiplication(D,D,pm);
D_rBigInt = r(BigInt, D)
@time algebra_multiplication(D_rBigInt,D_rBigInt, pm);
D_rInt = r(Int, D)
@time algebra_multiplication(D_rInt, D_rInt, pm);
  0.000111 seconds (248 allocations: 173.313 KB)
  0.138465 seconds (959.79 k allocations: 25.889 MB, 33.71% gc time)

 in +(::Rational{Int64}, ::Rational{Int64}) at ./rational.jl:200
 in algebra_multiplication(::Array{Rational{Int64},1}, ::Array{Rational{Int64},1}, ::Array{Int64,2}) at ./In[8]:7


I think it’s okay (although other people might have better suggestions). But I think you’ll be happy to know that in julia 0.6 broadcasting has become even more awesome, so you won’t need your custom function at all!

(in julia 0.6):

julia> rationalize.(Int, [1.0, 1.5, 2.0], tol=1e-3)
3-element Array{Rational{Int64},1}:
 1//1
 3//2
 2//1


What is D, in fact?


Do you want an explicit D or a description?

D is a sparsely populated vector in ℝⁿ (or later -ℚⁿ).
The real example is supported on a couple hundred dimensions (coordinates) of a vector in ℝⁿ (n ~ 5000); hence the checks against zeros, so there are only a couple hundred actual multiplications and index lookups happening in the function.

Elements of D are given with precision 1e-7 (but I just checked that the actual precision does not impact the calculations; the D above, with the very favourable timings, has support of a couple of tens, not hundreds, sorry for that);

I can upload the actual D and pm somewhere if you want to investigate.


I think the request for D is to make your code reproducible. If it’s reproducible, it’s easier for people to test out your problem. It can be dummy data generated to show the problem.


it’s slightly different (dense), but easier to simulate, and shows similar behaviour:

pm = rand(1:k, k, k)
D = rand(k).*rand(-10:10, k);
@time algebra_multiplication(D,D,pm);
D_rBigInt = r(BigInt, D)
@time algebra_multiplication(D_rBigInt,D_rBigInt, pm);
  0.000128 seconds (195 allocations: 52.230 KB)
  0.848864 seconds (614.51 k allocations: 19.733 MB)


Doing lots of computations with rationals is not terribly practical because the size of the denominator tends to grow rapidly. Either you use Rational{Int} and it quickly overflows and gives nonsense, or you use Rational{BigInt} and it becomes extremely expensive. Unless you are doing number theory, stick to floating-point arithmetic.
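To illustrate the overflow half of that trade-off (my own example, not from the thread): the reduced denominator of a harmonic-type sum grows like lcm(1, …, n), which exceeds typemax(Int64) before n = 50, so the checked arithmetic in Julia's Rational{Int} throws, while Rational{BigInt} stays exact at increasing cost:

```julia
# Denominators of 1/1 + 1/2 + ... + 1/n grow like lcm(1, ..., n).
harmonic(T, n) = sum(one(T)//k for k in 1:n)

harmonic(BigInt, 50)    # exact (≈ 4.499...), but every step allocates BigInts

try
    harmonic(Int, 50)   # the denominator overflows Int64 well before k = 50
catch err
    err                 # -> OverflowError
end
```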


Yes, I already know that, but there are some fields in mathematics (other than number theory) where you cannot replace is equal to zero by is approximately zero ;-).
As I am doing some of those computations, I believe doing them rationally is the only thing that guarantees the mathematical correctness of the result. Or maybe it isn’t?


It depends on the kind of calculations you are doing. Another mathematically rigorous method is interval arithmetic; see e.g. the ValidatedNumerics.jl package, of which I am an author.
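The core idea can be sketched by hand (this toy Iv type is my own illustration; unlike a real package it does no outward rounding of the endpoints, so it is not actually rigorous):

```julia
# Toy interval type: tracks a lower and upper bound through arithmetic.
# NOT rigorous: a real implementation rounds lo down and hi up at each step.
struct Iv
    lo::Float64
    hi::Float64
end

Base.:+(a::Iv, b::Iv) = Iv(a.lo + b.lo, a.hi + b.hi)

function Base.:*(a::Iv, b::Iv)
    ps = (a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi)
    Iv(minimum(ps), maximum(ps))
end

# "f(x) > 0" is proved when the whole result interval lies above zero:
x = Iv(1.9, 2.1)
x * x + Iv(-1.0, -1.0)   # roughly [2.61, 3.41]: entirely positive
```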


Sounds like an interesting problem, @abulak. Do you have a reference implementation in another language, to compare performance?

In my past research in algebra, performance over the rationals was also a problem. I sometimes ran calculations over finite fields instead. I haven’t looked into it, but I bet Nemo.jl has fast mod-q arithmetic.

@ChrisRackauckas that would be really cool. Is there an issue or PR for delayed reduction of rationals? What do other languages do?


Nope, I’m doing it from scratch;
I tried using symbolics in Sage, but wasn’t even able to complete the calculations.
(Keep in mind that I’ve only recently started to program.)


I believe that’s the purpose of the Ratios.jl package. It’s unlikely to help here with Int, though, since it’ll be even more prone to overflow. Perhaps it’d cut off a little overhead with BigInt. May be worth a shot.


Thanks, Ratios are an order of magnitude faster:

using Ratios

SimpleRatio{T<:Integer}(x::Rational{T}) = SimpleRatio(x.num, x.den)

function simple_rationalize{T<:Integer, S<:Real}(::Type{T}, X::AbstractArray{S}; tol::Real=eps(eltype(X)))
    r(x) = SimpleRatio(rationalize(T, x; tol=tol))
    return r.(X)
end

sr{T<:Integer}(::Type{T}, a; tol=eps()) = simple_rationalize(T, a; tol=tol)

tests (the same randomly generated D and pm):

@time algebra_multiplication(D,D,pm);
D_srBigInt = sr(BigInt, D)
@time algebra_multiplication(D_srBigInt,D_srBigInt, pm);
D_rBigInt = r(BigInt, D)
@time algebra_multiplication(D_rBigInt,D_rBigInt, pm);
  0.000112 seconds (191 allocations: 6.844 KB)
  0.029445 seconds (191.98 k allocations: 13.363 MB, 44.68% gc time)
  0.217483 seconds (562.49 k allocations: 22.916 MB, 6.02% gc time)


I think that a package could improve this a lot more. All functions that perform many elementary operations (like dot products and matrix multiplications) could upconvert FastRational{Int} to FastRational{BigInt}, requiring the user to explicitly call a simplify routine which assumes Rational{Int} will be the output type (or overflow/fail). Of course this wouldn’t cover all cases, but it would speed up a very useful case; at least for DiffEq-land I’ve wondered about making this. Otherwise Rational is too slow to be useful in most cases… (though I did find a use for it yesterday).


It’s true that there can be roundoff errors in floating-point arithmetic, but it’s not true that nothing in floating-point arithmetic is ever exactly zero, or that floating-point numbers cannot represent an exact zero.
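For instance (my own illustration, not from the thread): dyadic rationals and small integers are represented exactly in Float64, so some cancellations are exact while others leave a residue:

```julia
# Dyadic rationals and small integers are exact in Float64:
@assert 0.5 - 0.5 == 0.0          # 0.5 = 1/2 is exactly representable
@assert 3.0 - 2.0 - 1.0 == 0.0    # small integers are exact
# Decimal fractions like 0.1 generally are not:
@assert 0.1 + 0.2 - 0.3 != 0.0    # leaves a residue of about 5.6e-17
```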

Every method of doing arithmetic on computers has its tradeoff. In floating-point arithmetic, you have to be aware of the possibility of roundoff errors. In rational arithmetic, if you are using bignums then it starts out orders of magnitude slower than hardware floating-point arithmetic, and can get rapidly worse if the size of the denominators grows (as it often will in real calculations). With interval arithmetic, for many realistic problems it can produce absurdly pessimistic error bounds.


Sure, I understand all (most?) of that. The thing is, my computations are supposed to be a proof (in the mathematical sense) of a theorem. If some expression is supposed to be equal to 0 and I read the answer 0.0, how can I guarantee that no rounding took place anywhere in the whole process? In this sense I feel I cannot rely on floating-point arithmetic.
So I need a certain algebraic condition (à la f(x) = 0) to be satisfied exactly; in this sense I don’t think I can do without exact rationals. (@dpsanders, thanks for the suggestion of interval arithmetic; I was not even aware something like this exists, but I don’t think it applies to my case. Or I have to read more. It definitely would, had I to prove that f(x) > 0.)

As a final point: I already do most of my computations in floating-point arithmetic.
Only after my solution is sufficiently close do I switch to rationals; and after projecting to ℚ I actually have to modify my solution (in ℚ) to satisfy the above f(x) = 0 exactly.
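That workflow can be sketched on a toy problem (hypothetical f and values, not the actual computation from this thread): solve approximately in floating point, project to ℚ with rationalize, then verify the algebraic condition with exact rational arithmetic:

```julia
# Hypothetical condition: f(x) = x^2 - x should vanish exactly.
f(x) = x^2 - x

x_float = 1.0000000001                         # approximate float solution
x_exact = rationalize(BigInt, x_float, tol = 1e-7)  # project to ℚ: 1//1

# The final check is exact rational arithmetic, no rounding anywhere:
f(x_exact) == 0   # true
```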

It made a very interesting read, but we strayed quite far from the original question which could be paraphrased:

does computation in rationals require that much allocation (hence slowness), or am I doing something wrong? (given how inexperienced in these matters I am)

The Performance Tips chapter explicitly says that

Unexpected memory allocation is almost always a sign of some problem with your code, usually a problem with type-stability.

Hence I was surprised that code performing well on Floats suffers from “unexpected memory allocation” after switching to rationals…