I’m defining an algebra structure on ℝⁿ, i.e. apart from standard (vector) addition I want to multiply vectors. The structure of the vector multiplication is encoded in a (Cayley) table `pm::Array{Int,2}`, i.e. `pm[i,j] = k` when eᵢ⋅eⱼ = eₖ. The full function computing the product is:
```julia
function algebra_multiplication{T<:Number}(X::Vector{T}, Y::Vector{T}, pm::Array{Int,2})
    result = zeros(X)
    for (i, x) in enumerate(X)
        if x != 0                                 # skip zero coordinates of X
            for (j, index) in enumerate(pm[i, :]) # index = k such that eᵢ⋅eⱼ = eₖ
                if Y[j] != 0                      # skip zero coordinates of Y
                    result[index] += x * Y[j]     # add x*Y[j] to the eₖ coordinate
                end
            end
        end
    end
    return result
end
```
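For example, on the group algebra of ℤ/2 (basis e₁ = identity, e₂ = the generator), the Cayley table is just the group’s multiplication table, and the product checks out by hand:

```julia
# Sanity check on the group algebra of Z/2: pm is the group's
# multiplication table, so e2⋅e2 = e1.
pm = [1 2;
      2 1]
X = [1.0, 2.0]                       # X = e1 + 2e2
Y = [3.0, 4.0]                       # Y = 3e1 + 4e2
algebra_multiplication(X, Y, pm)     # [11.0, 10.0] = (3+8)e1 + (4+6)e2
```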
Can you show the code you’re using to time things? It’s not obvious to me why Rational{Int} is so slow. The slowness of Rational{BigInt} seems a lot less surprising.
It’s because it does a reduction at every step. Even if you write it as A*B with rational matrices it will be very slow. What we need is the ability to compute all of the elements and then, after the full multiplication, reduce each element once.
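Roughly, a hand-rolled sketch of the idea (a hypothetical type, not an existing package): accumulate unreduced numerator/denominator pairs and call gcd only once at the end.

```julia
# Hypothetical unreduced rational: + and * never call gcd, so BigInt
# intermediates grow, but each element is reduced exactly once at the end.
immutable Unreduced{T<:Integer}
    num::T
    den::T
end

Base.:+(a::Unreduced, b::Unreduced) = Unreduced(a.num*b.den + b.num*a.den, a.den*b.den)
Base.:*(a::Unreduced, b::Unreduced) = Unreduced(a.num*b.num, a.den*b.den)

reduce_once(a::Unreduced) = a.num // a.den   # // does the single gcd reduction

reduce_once(Unreduced(big(1), big(3)) + Unreduced(big(2), big(7)))  # 13//21
```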
Ok, it turns out that this multiplication is the bottleneck; the very good results above were due to the too-simplistic example chosen (the vector D, which was very close to Int).
A real example (the same code, different D) has the following timings:
```julia
@time algebra_multiplication(D, D, pm);

D_rBigInt = r(BigInt, D)
@time algebra_multiplication(D_rBigInt, D_rBigInt, pm);

D_rInt = r(Int, D)
@time algebra_multiplication(D_rInt, D_rInt, pm);
```

```
  0.000111 seconds (248 allocations: 173.313 KB)
  0.138465 seconds (959.79 k allocations: 25.889 MB, 33.71% gc time)

OverflowError()
 in +(::Rational{Int64}, ::Rational{Int64}) at ./rational.jl:200
 in algebra_multiplication(::Array{Rational{Int64},1}, ::Array{Rational{Int64},1}, ::Array{Int64,2}) at ./In[8]:7
```
I think it’s okay (although other people might have better suggestions). But I think you’ll be happy to know that in julia 0.6 broadcasting has become even more awesome, so you won’t need your custom function at all!
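(If the custom function you mean is the r used in the timings above – which I’m assuming just rationalizes coordinate-wise – a fused dot call in 0.6 covers it:)

```julia
# Assuming r(T, D) = coordinate-wise rationalize (r isn't shown in the thread):
D_rBigInt = rationalize.(BigInt, D)
```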
D is a sparsely populated vector in ℝⁿ (or, later, in ℚⁿ).
The real example is supported on a couple hundred coordinates of a vector in ℝⁿ with n ≈ 5000 (hence the checks against zero), so only a couple hundred actual multiplications and index lookups happen in the function.
Elements of D are given with precision 1e-7 (but I just checked that the actual precision has no impact on the calculations). Also, the D above (with the very favourable timings) has support of a couple of tens of coordinates, not hundreds – sorry for that.
I can upload the actual D and pm somewhere if you want to investigate.
I think the request for D is to make your code reproducible. If it’s reproducible, it’s easier for people to test out your problem. It can be dummy data generated to show the problem.
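For instance, something like this could stand in for the real data (everything here is made up and scaled down; r is my guess at the rationalizing helper used in the timings above):

```julia
# Hypothetical stand-ins: a random Cayley table and a sparsely supported
# vector whose entries lie on a 1e-7 grid, scaled down from n ≈ 5000.
n = 500
pm = rand(1:n, n, n)                   # dummy Cayley table
D = zeros(n)
D[rand(1:n, 50)] = rand(0:1e-7:1, 50)  # ~50 nonzero coordinates

r(T, D) = [rationalize(T, x) for x in D]   # guess at the helper from above
```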
Doing lots of computations with rationals is not terribly practical because the size of the denominator tends to grow rapidly. Either you use Rational{Int} and it quickly overflows and gives nonsense, or you use Rational{BigInt} and it becomes extremely expensive. Unless you are doing number theory, stick to floating-point arithmetic.
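For instance, even the harmonic sum blows through Int64 denominators almost immediately:

```julia
# Denominator growth in action: the denominator of the running sum grows
# roughly like lcm(1,…,k), and Julia's checked Rational{Int} arithmetic
# throws rather than silently wrapping around.
function harmonic(n)
    s = 0 // 1                # Rational{Int}
    for k in 1:n
        s += 1 // k
    end
    return s
end

harmonic(20)     # 55835135//15519504 – still fine
harmonic(100)    # OverflowError – the denominator outgrows Int64 partway there
```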
Yes, I already know that, but there are some fields of mathematics (other than number theory) where you cannot replace “is equal to zero” by “is approximately zero” ;-).
As I am doing some of those computations, I believe doing them rationally is the only thing that guarantees the mathematical correctness of the result. Or maybe it isn’t?
It depends on the kind of calculations you are doing. Another mathematically rigorous method is interval arithmetic; see e.g. the ValidatedNumerics.jl package of which I am an author.
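A small sketch (using the ValidatedNumerics API of that era):

```julia
# An interval encloses the true real value, so conclusions drawn from it
# are rigorous: here 0 is guaranteed to lie in the result.
using ValidatedNumerics

x = @interval(0.1)          # a thin interval containing the real number 0.1
y = 3x - @interval(0.3)     # encloses the exact value 3·(1/10) − 3/10 = 0
0 ∈ y                       # true, certified; plain Float64 gives 3*0.1 - 0.3 ≈ 5.6e-17
```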
Sounds like an interesting problem, @abulak. Do you have a reference implementation in another language, to compare performance?
In my past research in algebra, performance over the rationals was also a problem. I sometimes ran calculations over finite fields instead. I haven’t looked into it, but I bet Nemo.jl has fast mod-q arithmetic.
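Even without Nemo, the idea is easy to sketch in plain Julia (this is not Nemo’s API):

```julia
# All arithmetic stays below p, so values never grow: if a result is
# ≡ 0 modulo several independent primes, it is very likely exactly 0.
const p = 2_147_483_647                             # the Mersenne prime 2^31 − 1

addp(a::Int, b::Int) = mod(a + b, p)
mulp(a::Int, b::Int) = Int(mod(widemul(a, b), p))   # widemul avoids Int64 overflow

mulp(addp(2^30, 2^30), 12345)   # stays in 0:p-1 no matter how long the chain
```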
@ChrisRackauckas that would be really cool. Is there an issue or PR for delayed reduction of rationals? What do other languages do?
Nope, I’m doing it from scratch;
I tried using symbolics in Sage, but wasn’t even able to complete the calculations.
(Keep in mind that I’ve only recently started to program.)
I believe that’s the purpose of the Ratios.jl package. It’s unlikely to help here with Int, though, since it’ll be even more prone to overflow. Perhaps it’d cut off a little overhead with BigInt. May be worth a shot.
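If I remember the package right, usage is along these lines (the num/den fields are implementation details I’m assuming here):

```julia
# SimpleRatio skips the gcd at every operation; reduce once at the end.
using Ratios

a = SimpleRatio(big(1), big(3))
b = SimpleRatio(big(2), big(7))
c = a + b                # numerator/denominator kept unreduced
c.num // c.den           # one gcd at the very end: 13//21
```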
I think a package could improve this a lot more. Functions that perform many operations (like dot products and matrix multiplications) could upconvert FastRational{Int} to FastRational{BigInt}, requiring the user to explicitly call a simplify routine which assumes Rational{Int} will be the output type (or overflows/fails). Of course, this wouldn’t cover all cases, but it would speed up a very useful case, at least for DiffEq-land; I’ve wondered about making this. Otherwise Rational is too slow to be useful in most cases… (though I did find a use for it yesterday).
It’s true that there can be roundoff errors in floating-point arithmetic, but it’s not true that nothing in floating-point arithmetic is ever exactly zero, or that floating-point numbers cannot represent an exact zero.
Every method of doing arithmetic on computers has its tradeoffs. In floating-point arithmetic, you have to be aware of the possibility of roundoff errors. In rational arithmetic, if you are using bignums then it starts out orders of magnitude slower than hardware floating-point arithmetic, and can get rapidly worse if the size of the denominators grows (as it often will in real calculations). With interval arithmetic, for many realistic problems it can produce absurdly pessimistic error bounds.
Sure, I understand all (most?) of that. The thing is, my computations are supposed to be a proof (in the mathematical sense) of a theorem. If some expression is supposed to be equal to 0 and I read the answer 0.0, how can I guarantee that no rounding took place anywhere in the process? In this sense I feel I cannot rely on floating-point arithmetic.
So I need a certain algebraic condition (à la f(x) = 0) to be satisfied exactly – in this sense I don’t think I can do without exact rationals. (@dpsanders, thanks for the suggestion of interval arithmetic – I was not even aware something like this exists, but I don’t think it applies to my case. Or I have to read more. It definitely would apply if I had to prove that f(x) > 0.)
As a final point: I already do most of my computations in floating-point arithmetic. Only after my solution is sufficiently close do I switch to rationals – and after projecting to ℚ I actually have to modify my solution (in ℚ) so that it satisfies the above f(x) = 0 exactly.
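Concretely, that last step looks roughly like this (made-up numbers; f is a toy stand-in for my real condition):

```julia
# Project the float solution to ℚ, then verify the condition exactly.
x_float = 0.3333334                                  # float solution, good to ~1e-7
x_exact = rationalize(BigInt, x_float, tol = 1e-6)   # 1//3
f(x) = 3x - 1                                        # toy stand-in for the real f
f(x_exact) == 0                                      # exact check in ℚ – true
```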
It made a very interesting read, but we have strayed quite far from the original question, which could be paraphrased as:
does computation in rationals require that much allocation (hence the slowness), or am I doing something wrong (given how inexperienced in these matters I am)?
The Performance Tips chapter explicitly says that

> Unexpected memory allocation is almost always a sign of some problem with your code, usually a problem with type-stability.
Hence I was surprised that code performing well on floats suffers from “unexpected memory allocation” after switching to rationals…
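(Though a quick check suggests these allocations are not a type-stability problem after all: Rational{Int} is a plain bits type, while every BigInt operation heap-allocates, so Rational{BigInt} arithmetic allocates even in type-stable code.)

```julia
isbits(Rational{Int})      # true  – stored inline, no heap allocation
isbits(Rational{BigInt})   # false – each + or * allocates new BigInts
```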