I’m a newbie to Julia. I just tested this simple code:
for i in range(10,30)
    println("10^$i=", 10^i)
end
but I get the results:
10^10=10000000000
10^11=100000000000
10^12=1000000000000
10^13=10000000000000
10^14=100000000000000
10^15=1000000000000000
10^16=10000000000000000
10^17=100000000000000000
10^18=1000000000000000000
10^19=-8446744073709551616
10^20=7766279631452241920
10^21=3875820019684212736
10^22=1864712049423024128
10^23=200376420520689664
10^24=2003764205206896640
10^25=1590897978359414784
10^26=-2537764290115403776
10^27=-6930898827444486144
10^28=4477988020393345024
10^29=7886392056514347008
10^30=5076944270305263616
10^31=-4570789518076018688
10^32=-8814407033341083648
10^33=4089650035136921600
10^34=4003012203950112768
10^35=3136633892082024448
10^36=-5527149226598858752
10^37=68739955140067328
10^38=687399551400673280
10^39=6873995514006732800
Is there any explanation for this, or is this a calculation error? Thank you.
Wrong results in calculation
Yes, this is expected behaviour, and is called integer overflow, or wraparound. It is described in the manual here: https://docs.julialang.org/en/v1/manual/integers-and-floating-point-numbers/index.html#Overflow-behavior-1
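As an aside, the wrapped values above can be reproduced in any language with exact integers by reducing modulo 2^64 and reinterpreting the result as a signed 64-bit value. A small Python sketch of what two's-complement wraparound does (the function name is just for illustration):

```python
def wrap_int64(n):
    """Reduce n modulo 2^64 and reinterpret as a signed 64-bit integer,
    mimicking the two's-complement wraparound of a 64-bit machine Int."""
    m = n % (1 << 64)
    return m - (1 << 64) if m >= (1 << 63) else m

for i in range(10, 31):
    print(f"10^{i} = {wrap_int64(10**i)}")
```

This prints exactly the values from the original post, including the sign flips where the wrapped value lands in the negative half of the 64-bit range.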
BTW, range(10, 30) does not work for me. What version of Julia are you on? Normally, you would write 10:30.
If you need those values, BigInt is available:
julia> const ten = BigInt(10)
10
julia> for i in 10:30
           println("10^$i = ", ten^i)
       end
10^10 = 10000000000
10^11 = 100000000000
10^12 = 1000000000000
10^13 = 10000000000000
10^14 = 100000000000000
10^15 = 1000000000000000
10^16 = 10000000000000000
10^17 = 100000000000000000
10^18 = 1000000000000000000
10^19 = 10000000000000000000
10^20 = 100000000000000000000
10^21 = 1000000000000000000000
10^22 = 10000000000000000000000
10^23 = 100000000000000000000000
10^24 = 1000000000000000000000000
10^25 = 10000000000000000000000000
10^26 = 100000000000000000000000000
10^27 = 1000000000000000000000000000
10^28 = 10000000000000000000000000000
10^29 = 100000000000000000000000000000
10^30 = 1000000000000000000000000000000
Most applications (outside of number theory) working with such large values probably want floating-point arithmetic: use 10.0^i rather than 10^i.
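To illustrate the tradeoff (a small Python sketch; the same double-precision behaviour applies in any language): floating-point powers never wrap around, but beyond about 15-16 significant digits they are only approximate.

```python
exact = 10**23                 # arbitrary-precision integer: exact
approx = int(float(10**23))    # nearest representable double, converted back
print(exact)    # 100000000000000000000000
print(approx)   # 99999999999999991611392 -- slightly off, but no wraparound
print(float(10**30))  # 1e+30 -- huge values are fine, just approximate
```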
This kind of question comes up periodically, and it is virtually never someone doing number theory who needs BigInt… it is always a new user just trying out arithmetic and discovering that computers have more than one numeric type.
In real applications, you are typically either counting things in the real universe (bytes, loop iterations, etcetera: use Int — overflow is not an issue on 64-bit machines), doing approximate calculations (use floating point), or doing number theory (use BigInt or similar).
I wonder if the OP tried this in Python or C++ or, really, anything. It’s not like you can get 10^30 with no issue on your desk calculator either?
Yes, Python’s built-in integer type is guaranteed never to overflow — internally, it automatically switches to a BigInt-like representation when the numbers get too big (Python 3 hides this dual representation from you, but it still happens under the hood). Not an issue as long as you don’t expect Python to compile to fast native code.
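For example, in Python the promotion is completely invisible (a quick sketch):

```python
import sys

a = 10**18
b = a * 100        # exceeds 64 bits, silently promoted to more digits
print(b)           # 100000000000000000000 -- no wraparound
# The integer object simply grows with the number of digits:
print(sys.getsizeof(10**2) < sys.getsizeof(10**40))  # True
```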
Doing this in Julia would make things slightly more friendly for the first few hours of using Julia — it would slightly delay having to learn about the existence and tradeoffs of different numeric types — at the price of making virtually everything everywhere vastly slower.
Of course, you run into the same overflow limitations if you are storing integer data in NumPy or Pandas data structures, because these use native fixed-precision integers for performance reasons.
Most scientific calculators have only one numeric type, decimal floating point, so they shouldn’t have a problem with 10^30.
I just want to set right the sentence “only people in number theory need BigInts”. There are several things slightly wrong with it:

Many mathematicians need exact large numbers. I am in group/representation theory, and all the time you need groups whose order already does not fit in 64 bits: for example GL(V), where V is a vector space of dimension 10 over the field with 2 elements (very useful for coding theory, but also for group theory in general). So these numbers are useful in representation theory, group theory, combinatorics, coding theory and, more generally, almost all of algebra.
The BigInt in Julia are not very useful: the integers in Python are actually more useful! Why? Well, the situation for calculations which need BigInts is usually that about 99% of the numbers actually fit in 64 bits, and only very few of them (but important ones) do not. But replacing Int by BigInt in the computation often slows it down by a factor of 200. By contrast, the big integers in Python, Ruby, Sage, GAP and other mathematically friendly places are only about 5 times slower than Int, because they have been written as a Union{Int,BigInt}, where one switches to BigInt only upon overflow.
So in Julia the most useful numbers for a mathematician currently are the SafeInts, which do not impose a large overhead and at least allow one to do a computation safely. Until the BigInts have their speed improved as I suggest above, they will be useful for no one.
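The switch-to-bignum-on-overflow technique described above can be sketched as follows (Python is used purely for illustration, and the names are hypothetical; Python’s own integers already do this internally):

```python
INT64_MIN, INT64_MAX = -(1 << 63), (1 << 63) - 1

def checked_mul(a, b):
    """Multiply two 'machine' integers; return (result, overflowed).
    Sketch of the fast path of a hybrid Int/BigInt type: stay in
    fixed width while possible, escalate to bignum on overflow."""
    r = a * b                      # Python computes this exactly
    if INT64_MIN <= r <= INT64_MAX:
        return r, False            # fast path: still a machine integer
    return r, True                 # slow path: would switch to BigInt

print(checked_mul(10**9, 10**9))    # fits in Int64, fast path
print(checked_mul(10**10, 10**10))  # overflows Int64, bignum path
```

The point of the design is that the overflow check is cheap relative to interpretation overhead in a slow language, but not relative to a single native multiply in a compiled one.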
Sure. I’m using “number theory” as a shorthand for branches of pure and applied math (e.g. crypto) dealing with huge number fields. The point is, if you are working in such a field, you know it and you know that you need bignums — it’s totally different from accidental overflow of integers used to count “real” things.
Your denominator here is not the same — working with small integers in Python etc. is nowhere near machine speed. This is precisely why Python can afford to make all integers silently overflow to bigints.
That being said, we know that bigint calculations in Julia are not nearly as fast as hand-coded bigint calculations in C, even using the same GMP library. The issue is that very efficient bigint calculations require one to carefully operate in-place on as many values as possible rather than allocating new values for each operation. For example, see the discussion in https://github.com/JuliaLang/julia/pull/10084 and elsewhere … ideally this might require some low-level compiler support. One might also want something in native Julia that inlines better, especially for small-to-moderate bignums (e.g. see https://github.com/JuliaLang/julia/pull/17015).
The performance of BigInt in Julia is not that bad when the numbers are actually big. Multiplying two 200-digit numbers in Julia is not slower than in the other systems I mentioned.
The problem is that this is not how the computations mathematicians need present themselves.
As I said, in a typical computation, 99% or 99.9% of the numbers used will actually fit in an Int64 or Int128.
The problem is to safely go on computing when encountering the few “big” numbers, without slowing down the rest of the computation too much. For that, the technique used by the other languages I mentioned is to actually have a type which in Julia lingo would be Union{Int,Int128,BigInt}, where the computation switches to BigInt when necessary and back to Int or Int128 when feasible. Using this, the numbers are what I said: take a straightforward arithmetic routine in Python. The Julia version using Int will be on average 5-10 times faster. On the other hand, the Julia version using BigInt will sometimes be 50 times slower. To solve this, one would have to write the Union type I mention, which is currently beyond my skills (and probably needs compiler help to be fast).

You can use SafeInts.jl as a nice starting point.

I am not sure what changes you want from the compiler, but I would hazard a guess that with 2 types in the Union it would be reasonably fast, and possibly even with 3.
You’ve made this claim several times, but you haven’t provided any evidence yet. If you are doing exact math (“number theory” etc.) on a number field that doesn’t fit in Int, in a real application I’m guessing you spend much of your time on larger numbers, especially since bignum operations on larger numbers are slower. This is certainly true in crypto, coding theory, and computational number theory; can you give a practical counterexample?
Obviously, if you take (“number theory” bignum calculations) / (all integer calculations in all code everywhere), the fraction will be close to zero, but I don’t think that is the appropriate comparison.
I have quite a few algorithms I programmed which behave as I claim. I showed two toy examples in the forum when I was learning Julia last year:
Computing the inverse of a Hilbert matrix:
Julia slow with Hilbert matrices
Computing the collatz function:
A plea for int overflow checking as the default
I have many serious programs behaving the same way, but they are a bit too technical to give in this forum.
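The Collatz iteration referenced above is easy to reproduce (a Python sketch with hypothetical helper names; with arbitrary-precision integers the intermediates can never wrap, whereas in a fixed 64-bit type, starting values near typemax(Int64) quickly produce intermediates that overflow):

```python
def collatz_steps_and_peak(n):
    """Number of steps for the Collatz iteration to reach 1,
    plus the largest intermediate value encountered on the way."""
    steps, peak = 0, n
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        peak = max(peak, n)
        steps += 1
    return steps, peak

print(collatz_steps_and_peak(27))  # (111, 9232)
```

The excursions above the starting value are exactly the part of such computations where a hybrid or checked integer type earns its keep.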
In your example, > 99% of the final numbers exceed typemax(Int), so it seems like bignum calculations will inevitably dominate the calculation time even if you had a mixed-precision type. And this is a toy example — in real research problems with the Hilbert matrix, you would presumably be working with even larger numbers.
In your posted example all of the calculations fit into Int. And in a real (non-toy) application where you need to calculate this function on bignums, presumably the situation is different, but I’d guess that in this case the computation time is dominated by the bignum-required arithmetic.
(Of course, you can contrive corner cases where it just barely hits the boundary where you need to switch over to bignums, but I’m skeptical that such cases are the norm in real research.)
It seems likely to me that the real issue here is not needing a hybrid Int/BigInt type, but that Julia’s compiler needs to be better about reusing BigInts that don’t escape a computation. Currently, since BigInts are treated as being immutable, each new BigInt value is allocated afresh, and the memory for old BigInts that are no longer used needs to be reclaimed by the garbage collector. This can add a lot of overhead. Slower dynamic languages, on the other hand, often use reference counting for memory management, which is generally much slower than what Julia does, but it does make it trivial to reuse BigInts that have a reference count of zero, which is probably why you’re seeing better BigInt performance in these dynamic languages.
If I’m right, then Int becoming a hybrid BigInt type would be a terrible disappointment to you, since it wouldn’t solve the performance problem you’ve observed; it would just make standard Int operations slower and harder to reason about. What is actually needed to fix the issue is better escape analysis and a scheme for BigInt allocation that is visible to Julia’s compiler, unlike the current one, which is completely opaque to it.
Another way to put this is that I suspect you’ve misinterpreted some data about programming languages by concluding that there is cause and effect between these two observations:

1. Interpreted languages often use hybrid Int/BigInt implementations
2. Interpreted languages tend to have better performance for BigInt-heavy workloads

You’ve concluded that 1 causes 2 because 1 and 2 tend to go together. I don’t think that’s the right causality graph, though. There’s a different, much more subtle causality graph, starting from the common root of “slowness of interpreted languages”:
1. Interpreted languages are slow (especially at native Int operations)
2. Therefore there’s no real harm in using hybrid Int/BigInt implementations
3. So interpreted languages often use hybrid Int/BigInt implementations
At the same time:

1. Since these languages are slow anyway, they’re also likely to use reference counting
2. Reference counting happens to be quite good for BigInt-heavy workloads
3. So interpreted languages tend to have better performance for BigInt-heavy workloads
What you’re observing, then, is that interpreted languages tend to exhibit both 1 and 2 (they have hybrid integer types and better performance on BigInt-heavy workloads), but 1 does not cause 2; they’re both caused by language slowness. By this theory, making integers hybrid in Julia would not fix the performance issues with BigInt-heavy workloads, it would only cause Int operations to become slower and more complicated.
Actually, this example perfectly illustrates my point when you run it for “real” problems: in my example I ran it up to 10^7 in order not to tire readers, but in practice I ran it up to 10^10, that is, overnight.
Then about 1/1000000 of the computations have intermediate values that overflow Int64 (I actually managed to do them in Julia, since they do not overflow SafeInt128).
And I really have quite a few computations which behave exactly the same way. I agree, on the other hand, that the Hilbert matrices are different.
It may be that my diagnosis is wrong. Nevertheless, it is the case that Magma can invert Hilbert matrices 3-5 times faster than Julia. It is desirable to understand how they manage that. The only thing I can tell is that they do not have a special algorithm for Hilbert matrices; the speedup is observed for all Rational matrices…