Tonight I hated Julia, the REPL, and overflows

It seems you are missing some information: CPU manufacturers have split from CPU designers.


julia> pb(n) = 1 - prod(((365-n+big(1//1)):big(365//1))./365)
pb (generic function with 1 method)

julia> pb(25)

julia> pb(365)

julia> pb(366)

julia> Float64(1-pb(365))

Global constants let you create typed variables in global scope. Here is an example:

julia> const a = big"100"

julia> a = 10
ERROR: invalid redefinition of constant a
 [1] top-level scope at REPL[2]:1

julia> a = big"10"
WARNING: redefining constant a

And if you pass in a big n value, you'll get what you want.

julia> p(n) = 1 - prod(365:-1:(365-n+1)) / 365^n
p (generic function with 1 method)

julia> p(a)

julia> p(big"366")

julia> p(big"25")

It is better to show the code you ran and the error you got. Then you can get useful insight from more knowledgeable people here; after all, that is what this place is for. Maybe you mistakenly used older, pre-1.0 syntax, or something else went wrong. At this point, we only have guesses, which may well be wrong.


I think the OP may be referring to this

julia> x::Int = 1
ERROR: syntax: type declarations on global variables are not yet supported
 [1] top-level scope at REPL[1]:1
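Since typed global declarations were not supported, one common workaround (besides a `const` binding, which fixes the type as shown earlier in the thread) is a typed `Ref`. A minimal sketch, with an illustrative variable name:

```julia
# Sketch of a common workaround: a typed Ref acts like a "typed global"
# whose contents are always an Int (the name `counter` is illustrative).
counter = Ref{Int}(0)
counter[] = 2          # OK: stores an Int
# counter[] = "two"    # would throw: a String cannot be converted to Int
```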

That’s what I assumed and I mentioned it in the first post.


This approach is used not only in Common Lisp, but in all of the languages mentioned where Ints do not overflow: Python, Ruby, Mathematica, Magma, GAP… Comparatively, BigInts are very slow in Julia (at least 50 times slower). It would be really useful if someone implemented such a type in Julia (unfortunately, it is still beyond my skills).


Doesn't

struct BigIntOnDemand

almost do this? Not with packed bits but with a union tag, but it is very close.


You would need Union{Int,Int128,BigInt} for optimal efficiency. And where the skill is needed is writing the arithmetic operations/overflow detection and handling for such a type so that they are fast.
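A minimal sketch of such a tagged-union type, using `Base.Checked.add_with_overflow` for the overflow detection; the name `BigIntOnDemand` is borrowed from the post above, only `+` is implemented, and this is nowhere near optimally fast — it just illustrates the fast-path/slow-path split:

```julia
# Sketch of a demand-promoted integer: stays an Int until an operation
# overflows, then promotes to BigInt. Only + is implemented here.
struct BigIntOnDemand
    value::Union{Int, BigInt}
end

function Base.:+(a::BigIntOnDemand, b::BigIntOnDemand)
    av, bv = a.value, b.value
    if av isa Int && bv isa Int
        s, overflowed = Base.Checked.add_with_overflow(av, bv)
        overflowed || return BigIntOnDemand(s)   # fast path: no allocation
    end
    return BigIntOnDemand(big(av) + big(bv))     # slow path: promote
end
```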

It was mentioned before. It'll be very hard or impossible to get the full speed of Int from this (mostly because of vectorization). It would be a good implementation of BigInt, although I thought GMP was doing that already. Also, as someone (I believe it was @stevengj) pointed out last time, for most problems where you truly need BigInt, you won't benefit from this optimization most of the time, since you spend most of your time when the numbers are actually big.

You probably don’t want that since it makes the storage unnecessarily large. You’d want the value to either be represented as an inline Int or a pointer to a heap-allocated BigInt. And the garbage collector would need to understand which was which. This could potentially be hacked into the system in C code as a special type, but that’s pretty unsatisfying, which is why no one has done it. The “right” way to do this instead would be to enable efficient inline/pointer unions like that as user-defined types and then define a hybrid BigInt type that way. And of course, you’d also want to teach the compiler that it can reuse heap-allocated BigInt values if it’s sure that no one else has a reference—otherwise you’ll have better performance until values get large but then have a sudden drop-off as soon as they get too big. But I’m sure you can see why all of this hasn’t been done yet.


I would argue the opposite: when I need BigInt in a mathematical computation, usually only a few of the numbers involved are too big for Int. I have many examples which run faster in Ruby/GAP/Magma because I have to run them with BigInt in Julia, while the other systems' integer types are sometimes 50 times faster than BigInt (and still 5 times slower than Int, which is why I approve of Julia's decision to default to Int; in Julia, BigInt is 250 times slower than Int).

Yes, I know this is difficult. That's why I said my skills are as yet insufficient.

You have to :heart: the Julia community.

User makes a somewhat inflammatory post.

Julia community responds with reasonable answers/comments to said post, then proceeds with an in-depth discussion of efficient big-num/big-int implementation.



…and we lost him :wink:


What I meant is that if you are running a single piece of code multiple times, and that line of code either does or does not need BigInt, then that line of code will need BigInt far more often than not. This is simply because there are far more BigInts than Ints. If you are running a piece of code only once (not in a loop and not with different inputs), then its performance doesn't matter.

Of course, it's entirely possible that only a small portion of your code needs BigInt, in which case you should limit your use of BigInt to those few lines of code.


Normally, big numbers appear in divisions (by another big number), so an alternative approach would be to contain those divisions inside functions.

This is not the way it goes. A typical example I had recently: when computing the size of a set, about 10^9 intermediate computations fit within Int64 or Int128, and only about 30 did not. I had the choice of redoing the entire computation in BigInt (very slow) or, better, using SaferIntegers, but then it is quite complicated, even with a try block, to switch to BigInt to get past an overflow and back to SafeInt when possible…
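The fast-path-with-BigInt-fallback pattern described here can be sketched without any packages, using `Base.Checked` directly (the function name `safe_prod` is invented for illustration; a real implementation would switch per operation rather than redo the whole product):

```julia
# Sketch of the "fast path, redo in BigInt only on overflow" pattern,
# using Base.Checked so it needs no packages. safe_prod is an invented name.
function safe_prod(xs)
    acc = one(Int128)
    for x in xs
        acc, overflowed = Base.Checked.mul_with_overflow(acc, Int128(x))
        overflowed && return prod(big.(xs))   # rare slow path: redo in BigInt
    end
    return acc
end
```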

So you are saying you repeated the computation 10^9 times?

Maybe try BitIntegers.jl; they work well for me, much faster than BigInts.

No. I said that the computation involved 10^9 intermediate computations, which were all different, and that about 30 of them overflowed Int128. And that this is fairly typical of the kind of computations I have to do in representation theory of Hecke algebras, one of my topics.