This is not an issue with Julia. It’s a property of IEEE 64-bit floating-point numbers. The same thing happens in any language that uses standard floating-point numbers.
Thanks. I understand that an overflowed float number like 10.0^50 is represented by Inf, which is reasonable. But I was wondering why an overflowed integer 10^50 is represented by 0. Why isn’t it also Inf? I am a little curious.
Thanks. I understand that Julia does not track the overflow of big numbers due to the efficiency considerations. But I was just wondering why overflowed integer and overflowed float number give different results? Shouldn’t both be Inf?
typeof(Inf) is Float64. Or to put it another way, the IEEE floating-point format defines bit strings that represent concepts like Infinity and NaN, whereas the Int data type can only contain an integer and has no way to represent Infinity.
No, real numbers and integers are very different number systems, and the computer approximations of these number systems are different and behave differently. Floating-point numbers have Infs and NaNs; computer integers do not.
julia> 10^50
-5376172055173529600
64-bit computer integers actually implement \mathbb{Z}/n, the integers mod n, for n=2^{64}. (IEEE 754 covers floating point, not integers.) This number system is closed under arithmetic operations, and the above calculation is correct for \mathbb{Z}/n, n=2^{64}.
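To see that the "strange" number above really is the mod-2^{64} answer, the wrapped value can be reproduced outside Julia by doing exact arithmetic, reducing mod 2^{64}, and reinterpreting the 64-bit pattern as a signed two's-complement value (a Python sketch, since Python integers are exact):

```python
# Reproduce Julia's 10^50 result: exact value, reduced mod 2^64,
# then reinterpreted as a signed (two's-complement) 64-bit integer.
v = 10**50 % 2**64      # the residue in Z/2^64
if v >= 2**63:          # top bit set => negative in two's complement
    v -= 2**64
print(v)                # -5376172055173529600, matching the Julia output
```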
One might devise a finite computer integer type that behaved the way you expect. But it would not be as mathematically rational as the integers mod n. I'm not a hardware person, but I suspect a finite integer system that produced Infs on overflow would necessarily be less efficient than one that straightforwardly implements \mathbb{Z}/n.
On our real existing PC hardware, the circuitry for wrapping integer arithmetic is present and paid-for and is optimized to death, while circuitry for saturating integer arithmetic didn’t make the cut.
That being said, saturating integer arithmetic is not too exotic; as far as I understand, plenty of microcontrollers and special-purpose hardware have circuitry for it. Heck, lots of CPU internals use saturating integer arithmetic (e.g. branch history tables).
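For a concrete picture of what saturating semantics would mean, here is an illustrative Python sketch (not anything Julia does; the clamp bounds are just the signed 64-bit range):

```python
# Saturating 64-bit addition in software: instead of wrapping,
# clamp the exact sum to the representable range.
INT64_MAX = 2**63 - 1
INT64_MIN = -2**63

def saturating_add(a, b):
    """Compute a + b exactly, then clamp to [INT64_MIN, INT64_MAX]."""
    s = a + b                               # Python ints never overflow
    return max(INT64_MIN, min(INT64_MAX, s))

print(saturating_add(INT64_MAX, 1))   # pins at 9223372036854775807
print(saturating_add(2, 3))           # in-range sums are unchanged: 5
```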
I didn’t realize any particular wrapping behavior was needed, I thought it just came as a result of how any integer arithmetic operation worked. Is overflow not intrinsically a more complex operation?
Floating-point types, including Julia's, generally follow the IEEE 754 standard (or come very close), which among other things reserves some bit patterns for Inf and NaN. Integer types generally do not; every bit pattern is an ordinary integer value, and an operation going beyond the type's range overflows or underflows to the wrong values.
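Those reserved bit patterns can be inspected directly: in IEEE 754 binary64, +Inf is the pattern with all exponent bits set and a zero fraction, i.e. 0x7ff0000000000000 (a Python sketch using the standard struct module):

```python
import math
import struct

# Round-trip a Float64 Inf through its raw 8-byte encoding and read
# the bits back as an unsigned 64-bit integer.
bits = struct.unpack("<Q", struct.pack("<d", math.inf))[0]
print(hex(bits))   # 0x7ff0000000000000: all-ones exponent, zero fraction
```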
Not always. First, note that integer literals (a single value you type) can be parsed as different types depending on how big they are. On my system, the smaller ones parse as 64-bit integers, at some point larger ones parse as 128-bit integers, and at the extremes they parse as arbitrary-precision integers:
Obviously some overflow occurred, but not to 0! Which brings me to the other way integers can be parsed differently: on 32-bit systems, small integer literals are parsed to 32-bit integers, which do overflow to 0 for your case:
julia> Int32(10)^Int32(50)
0
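Incidentally, landing on exactly 0 is no accident: 10^50 = 2^50 \cdot 5^50, and 2^50 is already a multiple of 2^32, so the residue mod 2^32 vanishes. A quick check with exact integers (Python sketch):

```python
# 10^50 = 2^50 * 5^50 contains the factor 2^50, which 2^32 divides,
# so the 32-bit wrapped result is exactly zero.
r = 10**50 % 2**32
print(r)   # 0
```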
On 32-bit systems, Julia will also parse progressively larger integer literals to larger-range integer types (64-bit, 128-bit, arbitrary precision). Literals with a decimal point parse to 64-bit floating point regardless of magnitude or system. Integers get the special treatment because of how much data the CPU processes at once (a word) and their use in memory addresses; Julia reflects this with the type aliases Int/UInt being assigned to Int32/UInt32 or Int64/UInt64 depending on the system.
julia> FixedPointNumbers.Fixed{Int8,0}(10) ^ 2
100.0Q7f0
julia> ans + ans
-56.0Q7f0
julia> FixedPointNumbers.Fixed{Int8,0}(200)
ERROR: ArgumentError: Q7f0 is an 8-bit type representing 256 values from -128.0 to 127.0; cannot represent 200
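The -56.0 above is ordinary two's-complement wrapping in the underlying Int8: 100 + 100 = 200, and 200 mod 256, read back as a signed 8-bit value, is 200 - 256 = -56. A quick check (Python sketch):

```python
# Emulate Int8 wraparound: reduce the exact sum mod 2^8, then
# reinterpret the 8-bit pattern as signed two's complement.
s = (100 + 100) % 256
if s >= 128:        # top bit set => negative
    s -= 256
print(s)            # -56, matching the Q7f0 result
```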