We could potentially introduce wrapping arithmetic operators (syntax ideas: `+%` or `%+`?) which indicate intentional overflow and then use them, but yeah, it's a whole process, and I'm not entirely convinced that it's actually better. The situation in C is rather different from Julia's: in C, signed integer arithmetic that overflows is undefined behavior, so it's entirely valid to have a compiler that warns you about it. In Julia, integer arithmetic is not undefined at all; it's explicitly defined to wrap around, doing correct modular arithmetic, which is a useful and valid mathematical operation.
I realize that I'm a bit unusual in this, but I don't even think of fixed-size integer types as approximating ℤ, the ring of integers. Instead, I think of them as implementing fully correct arithmetic in the modular ring ℤ/2ᴺℤ. So when I see `Int8(123) + Int8(45) == Int8(-88)`, it doesn't seem wrong: that is the correct answer modulo 256, which is what you've asked for by using the `Int8` type:
```julia
julia> mod(123 + 45, 256)
168

julia> mod(-88, 256)
168
```
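To see the same thing outside of a Julia REPL, here's a small sketch in Python (whose integers are arbitrary precision, so the fixed-width wrap-around has to be simulated explicitly; the `wrap` helper is hypothetical, not part of any library):

```python
def wrap(x, bits=8):
    """Map an arbitrary integer x to its signed two's-complement
    representative modulo 2^bits, i.e. into [-2^(bits-1), 2^(bits-1) - 1].
    This simulates what a fixed-width type like Int8 does on overflow."""
    m = 1 << bits
    return ((x + (m >> 1)) % m) - (m >> 1)

# Int8(123) + Int8(45) wraps to -88, the unique signed
# representative of 168 in the ring Z/256Z:
print(wrap(123 + 45))        # -88
print(wrap(123 + 45) % 256)  # 168, the same residue as mod(-88, 256)
```

The point of the helper is that it is a pure reduction into ℤ/2⁸ℤ: the "surprising" `-88` and the "expected" `168` are the same ring element, just written with different representatives.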
From that perspective it's not only not surprising that `10^1000 == 0`, but it's actually the only correct answer, because if we do 10^1000 in full precision and then reduce modulo 2^64, that's what we get:
```julia
julia> mod(big(10)^1000, big(2)^64)
0
```
Mathematically, the result is zero because 10^1000 = 2^1000 · 5^1000 and 1000 ≥ 64, so 2^64 divides 10^1000. The same zero result doesn't happen if your base isn't divisible by 2:
```julia
julia> 3^1000
6203307696791771937

julia> big(3)^1000
1322070819480806636890455259752144365965422032752148167664920368226828597346704899540778313850608061963909777696872582355950954582100618911865342725257953674027620225198320803878014774228964841274390400117588618041128947815623094438061566173054086674490506178125480344405547054397038895817465368254916136220830268563778582290228416398307887896918556404084898937609373242171846359938695516765018940588109060426089671438864102814350385648747165832010614366132173102768902855220001

julia> mod(big(3)^1000, big(2)^64)
6203307696791771937
```
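Both reductions can be cross-checked with Python's three-argument `pow`, which computes modular exponentiation directly (just a sketch recomputing the reductions shown above, not anything Julia-specific):

```python
# 10^1000 is divisible by 2^64 (since 10^1000 = 2^1000 * 5^1000),
# so its residue modulo 2^64 is zero:
print(pow(10, 1000, 2**64))  # 0

# 3 is odd, so 3^1000 has a nonzero residue modulo 2^64, and it
# matches the value the wrapped Int64 computation produced:
print(pow(3, 1000, 2**64))   # 6203307696791771937
```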
The result 6203307696791771937 here is the right answer; you just weren't asking the question you thought you were. Note how different this is from floating-point instability: in the case of floating-point error, the result is truly just wrong; there's no sense in which it's correct.
So from my perspective, this whole discussion carries a lot of unwarranted "floating-point primacy" assumptions, i.e. the assumption that the way floating-point works is "more correct" even though rounding and loss of precision are rampant there. When you're doing modular integer arithmetic, there is one and only one correct answer, and that's what native integer operations give you. Yes, that answer isn't what you expect if you think that `Int` approximates ℤ, but it is guaranteed to be equal to the answer you'd get in ℤ, reduced modulo 2^64. I have a very hard time being convinced that it's actually wrong for integer types to be defined to do modular arithmetic.
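That guarantee is just the statement that reduction modulo 2^64 is a ring homomorphism: wrapped arithmetic commutes with full-precision arithmetic followed by reduction. A quick sketch checking this on random inputs (`wrap64` is a hypothetical helper simulating Int64's wrap-around, not a Julia or Python builtin):

```python
import random

M = 1 << 64

def wrap64(x):
    """Signed two's-complement representative of x mod 2^64,
    simulating Julia's native Int64 arithmetic on overflow."""
    return ((x + (M >> 1)) % M) - (M >> 1)

random.seed(0)
for _ in range(1000):
    a = random.randrange(-M, M)
    b = random.randrange(-M, M)
    # Wrapped results agree with the full-precision results modulo 2^64:
    assert wrap64(a + b) % M == (a + b) % M
    assert wrap64(a * b) % M == (a * b) % M
print("ok")
```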