Surprise with Rational

julia> (2^62)//(2^62+1) * 2
ERROR: OverflowError: 4611686018427387904 * 2 overflowed for type Int64

julia> float((2^62)//(2^62+1)) * 2
2.0

One can overflow the numerator or denominator without even reaching a large number, so overflow-to-infinity is difficult (impossible?) for Rational.
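For what it's worth, the overflow in the snippet above can be sidestepped by widening to `Rational{BigInt}` (slower, but exact):

```julia
# Widening to BigInt-backed rationals avoids the Int64 overflow,
# at the cost of arbitrary-precision arithmetic speed.
r = big(2)^62 // (big(2)^62 + 1)
r2 = r * 2
# numerator(r2) is 2^63, which would not fit in Int64
println(numerator(r2) == big(2)^63)   # true
```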

It doesn’t, but you mean you would have liked it to. That would likely slow down all operations besides division (//), though it might have been a good option for debugging. The Posit number system keeps NaN (there named NaR, Not a Real, using a single bit-pattern), but drops Inf entirely, so I thought Infs might also be less useful for rationals.

Sorry, you’re right, they get converted (I thought they didn’t; that might have been one option to construct them):

julia> a = -2; a // 0
-1//0

Speed was never something Julia’s Rational type offered anyway. Almost every operation requires attempting to reduce the ratio to canonical form. Comparatively, I would anticipate that a check for 0//0 would have a small impact.

For what it’s worth, I never really use Rational arithmetic. I don’t have a horse in this race. Down-weight my opinions accordingly.

3 Likes

I think Rationals could actually be much faster (also BigInt, in case you want to base them on that; BigInt is basically the default in Python and not too slow — it’s not the main reason for Python’s slowness). Yes, canonical form is used, but it doesn’t seem to be needed, and dropping it would be the key to making Rationals faster. Comparisons would then get slower, but they are used less often.
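As a sketch of that idea (the `LazyRational` name and design here are hypothetical, not an actual package): arithmetic can skip the gcd entirely and only reduce when a canonical form is actually demanded:

```julia
# Hypothetical sketch: a rational type that skips canonicalization on
# arithmetic and reduces only on demand.
struct LazyRational{T<:Integer}
    num::T
    den::T
end

# multiplication and addition without gcd reduction
Base.:*(a::LazyRational, b::LazyRational) =
    LazyRational(a.num * b.num, a.den * b.den)
Base.:+(a::LazyRational, b::LazyRational) =
    LazyRational(a.num * b.den + b.num * a.den, a.den * b.den)

# cross-multiplication comparison: no reduction needed, but an extra multiply
Base.:(==)(a::LazyRational, b::LazyRational) = a.num * b.den == b.num * a.den

# reduce only when a canonical form is actually needed (display, hashing, ...)
function canonical(x::LazyRational)
    g = gcd(x.num, x.den)
    LazyRational(div(x.num, g), div(x.den, g))
end
```

The tradeoff: unreduced numerators and denominators grow (and overflow) much faster, which is presumably part of why Base reduces eagerly.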

Raku (formerly known as Perl 6) has rationals as the default number type, so I would look into what they do right (or not), or differently, to get speed:

1 Like

In practice, both Inf and NaN are really only ever encountered when dividing by zero (save for the rare occasion when incrementing above 1.7976931\times10^{308}), and taken together they provide computational benefits, allowing you to check for errors only after the chain of computations is done; it’s an incoherent policy to support one and not the other.

Likewise for 1//0 and 0//0.

Here’s yet another example of how incoherent the current policy is: 1//(1//0) gives 0//1, but 1//(im//0) throws an error.

A couple of reasons.

One pragmatic reason is that when I originally wrote the Rational code, the ±Inf cases could mostly use the same logic as for finite rationals if you’re a little clever about it. When I tried to handle 0//0 as well, it made everything annoyingly complex and didn’t “fall out” of the normal logic at all. That made us consider whether handling 0//0 was useful in the first place, which brings us to the other reason:


It would be better for most applications if 0/0 raised an immediate error rather than producing NaN and letting that poison your computation. The reason it doesn’t work that way is that many of the people debating the IEEE 754 design in the 1980s thought it would be too slow, so we ended up with a weird compromise where there are two kinds of NaNs — quiet and signaling. The latter are supposed to throw errors when produced, but don’t in practice, because no one ever implemented support for them in programming languages, so now all NaNs are effectively quiet. Since Julia’s Rationals are really slow anyway, raising an error is not an issue, so that’s what they do.

In short, handling 0//0 as NaN is annoying and useless. So we don’t.
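The asymmetry described above is easy to see at the REPL (behavior as of Julia 1.x):

```julia
# Rationals allow a signed "infinity" but reject 0//0 with an error,
# while floats quietly produce NaN for 0/0.
println(1//0)                     # 1//0
println(isnan(0/0))               # true
err = try; 0//0; catch e; e; end
println(err isa ArgumentError)    # true
```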

9 Likes

Hah, fair enough! I suppose the coherence will come from this:

  • Floats support both
  • Ints support neither
  • Rationals meet halfway

This one feels strange; do you know what’s going on?

1/(1//0 + 0im) gives 0//1 + 0//1*im, but 1//(1//0 + 0im) throws an error.

:pray:

1 Like

Inf doesn’t have any computational benefits for rationals over a series of calculations, since no calculation will produce one unless you start with an Inf (and why would you want Infs in your input data?).

I’m totally fine with the “incoherent policy”, since neither option is obviously better: throwing an error or propagating it. I can see it being bad NOT to stop right away; imagine spending lots of power and time on a supercomputer, then seeing NaNs at the end (or Infs, where they apply, for IEEE).

For debugging, I guess running to the end and then checking for them could be helpful (though it can be like looking for a needle in a haystack).

The good thing about Julia is that none of the built-in types have much advantage over types in a package (other than being the go-to type), so you can make your own rational type with signaling properties.
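A minimal sketch of such a type (the `StrictRational` name and behavior are hypothetical): wrap `Rational` and signal on any zero denominator at construction time, so even 1//0 errors immediately:

```julia
# Hypothetical "signaling" rational: constructing any x/0 throws at once,
# rather than producing 1//0 or letting a NaN-like value propagate.
struct StrictRational{T<:Integer}
    r::Rational{T}
    function StrictRational(n::T, d::T) where {T<:Integer}
        iszero(d) && throw(DomainError((n, d), "zero denominator"))
        new{T}(n // d)
    end
end

println(StrictRational(3, 6).r)   # 1//2
# StrictRational(1, 0)            # would throw DomainError
```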

NaNs in IEEE were meant to be signaling, carrying a payload (Infs do not, having only one bit-pattern each for Inf and -Inf), which I guess would help point to where the error happened IF it were implemented; but no language (or at least no major one) does this by default (or even as an opt-in, that I know of), so I don’t see getting NaNs at the end as very helpful. The IEEE float code will be faster, though, for not having to check at runtime and raise (precise) exceptions.

While I was wrong about 2//0 having a different bit-pattern than 1//0 (both mean Inf), that would be conceivable, with the differing numerator serving as a payload. But then it would only be possible for Infs, as opposed to NaNs (as in IEEE), which rationals don’t have, nor an easy way to add one.


If z = 1//0 + 0im then

  • 1/z is computed as inv(z) (see complex.jl:279), which has the code
function inv(z::Complex)
    c, d = reim(z)
    (isinf(c) | isinf(d)) && return complex(copysign(zero(c), c), flipsign(-zero(d), d))
    complex(c, -d)/(c * c + d * d)
end

that is, the second line of the function special-cases infinite components, which is exactly what a zero denominator produces.

  • 1//z is computed, on the other hand, as conj(z)//abs2(z) (see rational.jl:79), which is really computing (1//0)//(1//0) and hence does integer 0/0 division when trying to reduce a rational to lowest/canonical form.
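That failure path can be reproduced directly; the squared modulus itself is fine, and the error surfaces from the reduction step (the exact error type may vary by Julia version — here it shows up as an integer division-by-zero):

```julia
# Reproducing the failure path: conj(z)//abs2(z) boils down to
# (1//0)//(1//0), whose reduction performs integer 0/0 division.
z = 1//0 + 0im
println(abs2(z))                     # 1//0: the squared modulus is still "infinite"
err = try; (1//0) // (1//0); catch e; e; end
println(err)                         # error from the 0/0 reduction step
```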