A plea for int overflow checking as the default

By the way, interesting fact:

factorial(21)
ERROR: OverflowError()
...

Apparently this is because factorial(n) = gamma(float(n)+1). I guess I just thought this was a neat little reminder that in many circumstances, the kinds of functions that can actually produce huge integers will still overflow because they involve floats somewhere (a fact which, of course, cannot be relied on in general).

(By the way, I imagine the mathematicians in the crowd will say factorial should be changed in Base, because the \Gamma function is only one of infinitely many possible analytic continuations of the factorial, so factorial should throw an error on non-integer arguments.)

No, factorial(x::Int) does not use that method; in fact, it just uses a table lookup. It threw an OverflowError because the result is not representable as an Int, and this function is high-level enough (unlike primitive operations like +) that the overhead of an overflow check is negligible.

To use floating point, you have to call factorial(21.0). This does not overflow, and correctly gives the (approximate) answer 5.109094217170944e19.

(The Γ function is the standard extension of the factorial function to arbitrary real and complex arguments, so it makes a lot of sense for this to be the default.)
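The checked Int behaviour and the exact BigInt escape hatch can be seen side by side (a quick sketch; the exact wording of the error varies by Julia version):

```julia
# factorial on Int is a checked table lookup; on BigInt it is exact.
factorial(20)        # 2432902008176640000, the largest factorial an Int64 can hold
# factorial(21)      # throws OverflowError
factorial(big(21))   # 51090942171709440000
```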

8 Likes

I am looking at the possibility of converting software for Lie theory and group theory written in languages like Gap, Magma, or Maple to Julia. The possibility of an overflow is an absolute no-go for such software. Very large integers occur naturally all over the place in such software (like the size of the monster simple group), and approximate computation is useless.

Thus, for such software only BigInt is possible. The problems using BigInt are of two kinds:

-very slow. But why could this not be improved, by using checked Int64 to implement small BigInts and converting only when necessary? I would be happy if small BigInts were, say, only 5 times slower than Int64.

-cumbersome way of inputting constants. One would need a mode or a macro to interpret all integer constants as BigInt. I do not yet know Julia well enough to see whether this is currently possible.

1 Like

BigInt speed in Julia could certainly be improved. See e.g. Nemo.jl, which (IIRC) offers a faster alternative, and also BigInt and BigFloat performance is poor · Issue #4176 · JuliaLang/julia · GitHub. Patches welcome. But it’s not as trivial as you seem to think. You certainly can’t just make Int64 itself automatically overflow to BigInt without killing performance everywhere (since lots of things in Julia use Int arithmetic).

A macro to reinterpret all integer literals as BigInt is actually quite trivial to write. A REPL mode to do this is also possible.
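For concreteness, such a macro can be a short AST walk that wraps every integer literal in big(...). A minimal sketch (the names @bigints and makebig are mine, not from any registered package):

```julia
# Rewrite every integer literal in an expression as a BigInt.
makebig(x::Integer) = :(big($x))
makebig(ex::Expr)   = Expr(ex.head, map(makebig, ex.args)...)
makebig(x)          = x   # symbols, line nodes, etc. pass through unchanged

macro bigints(ex)
    esc(makebig(ex))
end
```

With this, `@bigints 2^100` expands to `big(2)^big(100)` and never overflows.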

2 Likes

While such a thing would be really cool, I guess it would be quite a daunting change.

The obvious memory layout for a maybeBigInt would be to store the value inline if it fits in 63 bits, and otherwise set the top bit and use the lower 63 bits as a union of flags and a pointer/reference to a heap-allocated bigint. Say, e.g., the next 7 bits store the length and the remaining bits are the pointer; if we have a small bigint of fewer than ca. 2^13 bytes, we can store it directly on the heap, and if it is larger, we store a BigInt reference (thanks to virtual memory we should get away with using a lot of bits in pointer types for flags).
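The tag-bit half of that layout is easy to sketch (the names and exact bit assignments here are my own illustration, not an actual Julia internal):

```julia
# Top bit 0: the word holds a 63-bit signed value inline.
# Top bit 1: the low 63 bits would hold flags plus a heap reference.
isinline(word::UInt64) = (word >> 63) == 0

# Recover the inline value by sign-extending the low 63 bits:
# shift the 63 payload bits to the top, then arithmetic-shift back down.
inline_value(word::UInt64) = reinterpret(Int64, word << 1) >> 1
```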

One would pay on each arithmetic operation, same as in all checkedInt schemes, and promote when we overflow.
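The per-operation check described above is exactly what Base.Checked exposes; a promote-on-overflow add could look like this (promoting_add is a name I made up for illustration):

```julia
using Base.Checked: add_with_overflow

# Add two Ints, falling back to BigInt only when the result would wrap.
function promoting_add(a::Int, b::Int)
    s, wrapped = add_with_overflow(a, b)
    return wrapped ? big(a) + big(b) : s
end
```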

However, one would need to teach the garbage collector to properly follow these references in Array{maybeBigInt}; I’m not sure how hard this would be, but I guess this would not be easy.

Anyone who knows the internals who can say how impossible this would be?

In general: Please no, keep the current behaviour. I want a language that generates “least surprise” assembly without me being a language lawyer (C++-style), and then optimizes the hell out of it (because llvm understands my processor better than me). Int64/Int32/Int16/Int8 addition / comparison / etc should always map to the obvious machine instructions. Python gets away with a different approach because everything is slow anyway (except external library calls, and then behaviour is hard to anticipate). This is easier to remember for everyone (how many programming languages do you interact with vs how many processor architectures / number models?).

2 Likes

More generally, would it be possible to have some equivalent of type inference for Integer and float literals? Literals are generally one of the bits of Julia that I find most annoying.

For example, if I write x = 101, it would be great if 101 parsed as an Int16 if x is known to be a 16-bit integer, or as an Int64 if x is known to be a 64-bit integer.

Maybe create a special type called "Literal" which promotes to other numeric types as soon as it is used in some other compound expression? Then I could write y = 1 - x instead of y = one(x) - x. In that system, the 1 would be of type NumericLiteral{"1"}. Whenever it is used in any primitive numeric operation, it would promote to the type of the other argument, or to some default type. This would also make it a lot harder to write code that isn't type stable.
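The promotion half of this idea already works for a user-defined wrapper via Julia's promote_rule machinery; here is a minimal sketch (the Lit type is hypothetical, and making the parser emit it for literals would still need compiler support):

```julia
# Status quo: a bare literal widens Int16 arithmetic,
# i.e. typeof(1 - Int16(5)) == Int64, while typeof(one(x) - x) == Int16.

struct Lit <: Number
    val::Int
end

# A Lit adopts the type of whatever number it meets in arithmetic.
Base.promote_rule(::Type{Lit}, ::Type{T}) where {T<:Number} = T
Base.convert(::Type{T}, x::Lit) where {T<:Number} = convert(T, x.val)

x = Int16(5)
y = Lit(1) - x   # stays Int16 instead of widening to Int64
```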

2 Likes

To add some data to the discussion, the following simple program:

# naive code to find the maxima of the collatz function
# test for example with lim=10^6
function collatz(lim)
    max = 1
    i = 1    # becomes 250 times slower if i=BigInt(1)
    while i < lim
        i += 1
        c = 0
        n = i
        while n > 1
            if n % 2 == 0
                n >>= 1
            else
                n = 3*n + 1
            end
            c += 1
        end
        if c > max
            max = c
            println("at $i new max=$max")
        end
    end
end

is 250 times slower with BigInt. The speed with Int64 is about 2-3 times slower than C++. With BigInt it is about 3 times slower than the interpreted language Gap (where ints are BigInts by default). This makes porting programs less attractive. Is there anything I can fix in the above code?

Have you tried Nemo? It uses the Flint bigint library and I believe it does some tricks to make its bigints more efficient than the standard ones.

Try changing n%2 to n & 2 … it looks like you’re getting bitten by faster div/mod/rem thoughts [llvm] · Issue #8188 · JuliaLang/julia · GitHub, and changing that line speeds it up by a factor of 4 on my machine.

Yes indeed, with n&2 it goes at C speed… but then it hangs with BigInt. Could that be a bug in Julia?

I have a problem with Nemo on my computer (with Julia 0.6, on Ubuntu). It complains that

error compiling init: could not load library “/home/jmichel/.julia/v0.6/Nemo/local/lib/libflint”

while I do have flint installed on my Ubuntu machine…

I managed to install Nemo on another machine. Replacing this time 1 by ZZ(1):

-It does not support n&1; I had to use n%2.
-It is even slower than BigInt (by about 15%).

Should be n & 1, not 2.

It’s not because of https://github.com/JuliaLang/julia/issues/8188 (I don’t know if there’s anything left to be done there…) but because of the difference between signed and unsigned division.
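The semantic gap shows up for negative operands, which is why the compiler cannot rewrite a signed % 2 into & 1 on its own:

```julia
# rem truncates toward zero, so it can be negative; a bitmask cannot.
(-3) % 2   # -1
(-3) & 1   #  1
# For non-negative n the two agree, so the manual rewrite is safe in this loop.
```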

Yes, you are right, I also noticed that it should be n&1. Now the BigInt version does not hang, but it is still 250 times slower.

I think you should look at the SaferIntegers package. I tried it out with your code using SafeInt64 instead of Int64 and got a slowdown on the order of 2.5x, which is much better than the 250x you had with BigInt!

5 Likes

Many people have been asking for examples of cases where you get overflow.

This isn’t something I necessarily need, but I do happen to have an example at hand on gist:

g(y::BigInt,x::BigInt) = BigInt((abs(y-x) + y-x)//2)
r(y::BigInt,x::BigInt)::BigInt = (x==0) ? y : mod(y,x)
p(n::Int) = sum(i->g(BigInt(1),g(sum(j->r((factorial(g(BigInt(j),BigInt(1))))^2,BigInt(j)),0:i),BigInt(n))),0:n^2)
map(p,1:17)

It’s the Jones formula for outputting the nth prime number. If you tried that with fixed Ints you would very quickly overflow at p(5) already; even though the 5th prime is very small, the calculations to obtain it use extremely large integers and factorials that overflow.

Try p(5) without BigInt

g(y::Int,x::Int) = Int((abs(y-x) + y-x)//2)
r(y::Int,x::Int)::Int = (x==0) ? y : mod(y,x)
p(n::Int) = sum(i->g(1,g(sum(j->r((factorial(g(j,1)))^2,j),0:i),n)),0:n^2)

However, it does seem to detect the overflow in 0.6

julia> p(5)
ERROR: OverflowError()
1 Like

AFAICT the issue is not that people deny the relevance of overflow in specific situations (especially in specific fields like computational number theory). It is just that the semantics in Julia for standard IntXs have been chosen because of practical considerations.

1 Like

People doing number theory know they need bigints. Int is for a separate sphere of applications where you are counting real things (elements of an array, loop iterations, bytes, …). It’s not practical to use the same integer type in both cases if you care about performance.

3 Likes

It’s just a fun example for others to play around with, if they were lacking one (this is just about the only practical application I can think of for this function). Even though BigInts are much slower, this Julia function is many orders of magnitude faster than Mathematica’s arbitrary-precision integers (the Mathematica version of this code basically chokes and doesn’t finish if you try to find the 17th prime or so). So even though you take a performance hit, Julia’s native BigInts are still much better than what other languages offer.

1 Like

Have you looked into SaferIntegers? They’re quite fast and they’ll tell you if there’s an overflow. The linked package comes with a plethora of SafeInt types in different bit sizes.

2 Likes

AMAZING! Thank you so much @Mason!! This is precisely the kind of solution I’ve been seeking!
