Apparently this is because factorial(n) = gamma(float(n)+1). I guess I just thought this was a neat little reminder that in many circumstances, the kinds of functions that can actually produce huge integers will still overflow because they involve floats somewhere (a fact which, of course, cannot be relied on in general).

(By the way, I imagine the mathematicians in the crowd will say factorial should be changed in Base, because the Γ function is only one of infinitely many possible analytic continuations of the factorial, so factorial should throw an error on non-integer arguments.)

No, factorial(x::Int) does not use that method; in fact, it just uses a table lookup. It threw an OverflowError because the result is not representable as an Int, and this function is high-level enough (unlike primitive operations like +) that the overhead of an overflow check is negligible.
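The distinction can be seen directly at the REPL (values shown for a 64-bit Int; 21! is the first factorial that no longer fits):

```julia
# factorial on Int checks that the result is representable:
factorial(20)        # 2432902008176640000, the largest factorial that fits in Int64
factorial(big(21))   # BigInt path: 51090942171709440000
# factorial(21)      # throws OverflowError: 21! does not fit in Int64
```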

To use floating point, you have to call factorial(21.0). This does not overflow, and correctly gives the (approximate) answer 5.109094217170944e19.

(The Γ function is the standard extension of the factorial function to arbitrary real and complex arguments, so it makes a lot of sense for this to be the default.)

I am looking into the possibility of converting software for Lie theory and group theory written in languages like Gap, Magma, and Maple to Julia. The possibility of an overflow is an absolute non-starter for such software. Very large integers occur naturally all over the place there (like the size of the monster simple group), and approximate computation is useless.

Thus, for such software only BigInt is possible. The problems with using BigInt are of two kinds:

- Very slow. But why could this not be improved by using checked Int64 to implement small BigInts and converting when necessary? I would be happy if small BigInts were, say, only 5 times slower than Int64.

- Cumbersome input of constants. One would need a mode or a macro to interpret all integer constants as BigInt. I do not yet know Julia well enough to see whether this is currently possible.

BigInt speed in Julia could certainly be improved. See e.g. Nemo.jl, which I believe offers a faster alternative, and also https://github.com/JuliaLang/julia/issues/4176. Patches welcome. But it’s not as trivial as you seem to think: you certainly can’t just make Int64 itself automatically overflow to BigInt without killing performance everywhere, since lots of things in Julia use Int arithmetic.

A macro to reinterpret all integer literals as BigInt is actually quite trivial to write. A REPL mode to do this is also possible.
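To illustrate just how trivial: here is a minimal, hypothetical sketch of such a macro (the names `@bigints` and `big_literals` are made up for this example):

```julia
# Recursively walk an expression and wrap every integer literal in big(...)
big_literals(x) = x                              # leave non-literals alone
big_literals(x::Integer) = :(big($x))            # 2 becomes big(2)
big_literals(ex::Expr) = Expr(ex.head, map(big_literals, ex.args)...)

macro bigints(ex)
    esc(big_literals(ex))
end
```

With this, `@bigints 2^100` evaluates with BigInt arithmetic throughout, so it does not overflow.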

While such a thing would be really cool, I guess it would be quite a daunting change.

The obvious memory layout for a maybeBigInt would be to store the value directly if it fits in 63 bits, and otherwise set the top bit and treat the lower 63 bits as a union of flags and a pointer/reference to a heap-allocated bigint. Say, e.g., the next 7 bits store the length and the remaining bits are the pointer; for a small BigInt of fewer than ca. 2^13 bytes we can store the data directly on the heap, and for anything larger we store a BigInt reference (thanks to virtual memory we should get away with using quite a few bits of a pointer for flags).
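A minimal sketch of the decode side of such a tagged word, under the assumed layout above (top bit = heap flag, low 63 bits = inline signed value; all names hypothetical):

```julia
const HEAP_BIT = 0x8000_0000_0000_0000

# Top bit clear: the word holds the value inline.
is_inline(word::UInt64) = (word & HEAP_BIT) == 0

# Recover the 63-bit signed value: shift the sign bit into position,
# then sign-extend with an arithmetic right shift.
inline_value(word::UInt64) = reinterpret(Int64, word << 1) >> 1
```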

One would pay a cost on each arithmetic operation, as in all checked-Int schemes, and promote on overflow.
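The per-operation cost can be sketched with the overflow-reporting primitives in Base.Checked (`maybe_add` is a hypothetical name; a real maybeBigInt would need a single tagged return type rather than this type-unstable Union):

```julia
using Base.Checked: add_with_overflow

# Add two Int64s; fall back to BigInt arithmetic if the result overflows.
function maybe_add(a::Int64, b::Int64)
    s, overflowed = add_with_overflow(a, b)
    overflowed ? big(a) + big(b) : s
end
```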

However, one would need to teach the garbage collector to properly follow these references in Array{maybeBigInt}; I’m not sure how hard that would be, but I suspect it is not easy.

Can anyone who knows the internals say how (im)possible this would be?

In general: Please no, keep the current behaviour. I want a language that generates “least surprise” assembly without me being a language lawyer (C++-style), and then optimizes the hell out of it (because llvm understands my processor better than me). Int64/Int32/Int16/Int8 addition / comparison / etc should always map to the obvious machine instructions. Python gets away with a different approach because everything is slow anyway (except external library calls, and then behaviour is hard to anticipate). This is easier to remember for everyone (how many programming languages do you interact with vs how many processor architectures / number models?).

More generally, would it be possible to have some equivalent of type inference for integer and float literals? Literals are one of the bits of Julia that I find most annoying.

For example, if I write x = 101, it would be great if 101 parsed as an Int16 if x is known to be a 16-bit integer, or as a 64-bit integer if x is known to be a 64-bit integer.

Maybe create a special type called “Literal” which promotes to other numeric types as soon as it is used in some other compound expression? So that I can write y = 1 - x instead of y = one(x) - x . In that system, the 1 would be of type NumericLiteral{“1”}. Whenever it is used in any primitive numeric operation, it would promote to the type of the other argument, or to some default type. This would also make it a lot harder to write code that isn’t type stable.
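A toy sketch of that idea (all names hypothetical; a real design would need the parser to produce these literal types, and would cover all operations, not just subtraction):

```julia
# A literal carries its source text as a type parameter (a Symbol,
# since type parameters cannot be Strings).
struct NumericLiteral{S} end

# Parse the literal's text into whatever type the context demands.
litvalue(::NumericLiteral{S}, ::Type{T}) where {S,T} = parse(T, String(S))

# On use in an operation, the literal adopts the other operand's type.
Base.:-(l::NumericLiteral, x::T) where {T<:Number} = litvalue(l, T) - x
```

So `NumericLiteral{Symbol("1")}() - Int16(3)` would stay an Int16, instead of promoting everything to Int64.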

To add some data to the discussion, the following simple program:

# naive code to find the maxima of the collatz function
# test for example with lim=10^6
function collatz(lim)
    max = 1
    i = 1   # becomes 250 times slower if i = BigInt(1)
    while i < lim
        i += 1
        c = 0
        n = i
        while n > 1
            if n % 2 == 0
                n >>= 1
            else
                n = 3*n + 1
            end
            c += 1
        end
        if c > max
            max = c
            println("at $i new max=$max")
        end
    end
end

is 250 times slower with BigInt. The Int64 version is about 2-3 times slower than C++; with BigInt it is about 3 times slower than the interpreted language Gap (where ints are BigInts by default). This makes porting programs less attractive. Is there anything I can fix in the above code?

It’s not because of https://github.com/JuliaLang/julia/issues/8188 (I don’t know if there’s anything left to be done there…) but because of the difference between signed and unsigned division.
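As an aside, the signed division in the parity test can be sidestepped with a bitmask, which works for BigInt too (a micro-optimization sketch, not a fix for the overall BigInt slowdown; the name `iseven_fast` is made up — Base's own `iseven` serves the same purpose):

```julia
# n % 2 == 0 requires a signed remainder; n & 1 == 0 is a plain bitmask.
iseven_fast(n::Integer) = n & 1 == 0
```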

I think you should look at the SaferIntegers package. I tried it out with your code, using SafeInt64 instead of Int64, and got a slowdown on the order of 2.5x, which is much better than the 250x you had with BigInt!
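For reference, Base itself ships the checked primitives that a SafeInt-style type can be built on — an illustration with Base.Checked, not the package's actual implementation:

```julia
using Base.Checked: checked_mul

# Succeeds: 2 * 10^18 still fits in Int64.
checked_mul(2, 10^18)

# checked_mul(10, 10^18) would throw an OverflowError instead of
# silently wrapping around the way plain * does.
```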

It’s the Jones formula for outputting the nth prime number. If you tried that with fixed-width Ints you would very quickly overflow, at p(5) already: even though the 5th prime is very small, the calculations to obtain it use extremely large integers and factorials that overflow.

AFAICT the issue is not that people deny the relevance of overflow in specific situations (especially in specific fields like computational number theory). It is just that the semantics in Julia for standard IntXs have been chosen because of practical considerations.

People doing number theory know they need bigints. Int is for a separate sphere of applications where you are counting real things (elements of an array, loop iterations, bytes, …). It’s not practical to use the same integer type in both cases if you care about performance.

It’s just a fun example for others to play around with, if they were lacking one (using this function as an example here is just about the only practical application I can think of for it). Even though BigInts are much slower, this Julia function is many orders of magnitude faster than Mathematica’s arbitrary-precision integers (the Mathematica version of this code basically chokes and doesn’t finish if you try to find the 17th prime or so). So even though you take a performance hit, Julia’s native BigInts are still much better than what some other languages offer.

Have you looked into Safer Integers? They’re quite fast and they’ll tell you if there’s an overflow. The linked package comes with a plethora of different SafeInt types of different bit sizes.