Hi there, I tried raising 2 to large powers with the `^` operator, but when the exponent gets large I get zero.

Julia uses native machine-integer arithmetic by default, so these operations are silently overflowing. You can use arbitrary-precision `BigInt`s instead:

```
julia> big(2)^1600
44462416477094044620016814065517364315819234512137839319418223093753683069769152238984782576173969417485953521141049383745107056455283979316385016701612810119562585078620415976730705698345087039035930761275083827265405596065418173652685035788898113991627042329246850314029877161622487411877779578892097029690461532001915311366862468942148892205997883828265721290296220249202674740669814705818564765009960300389641843321936008416473775144511929246788246559538970957296160626364645376
```

Thanks! It would be safer if it threw an error instead of returning 0.

In this case Julia is doing what C/C++ and Fortran usually do, for performance reasons. An alternative is to use the package SaferIntegers.jl.

If anyone could figure out a way to make this error with reasonable performance (which might be possible), that would be a welcome PR. The problem is the only way we’ve found so far that is reliable is to check for overflow on each multiplication operation which is roughly a 2x performance hit (see WIP: Introduce Overflow Checking for ^(::Integer, ::Integer) by Keno · Pull Request #21600 · JuliaLang/julia · GitHub).
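As a sketch of that check-every-multiplication strategy (this is the ~2x-cost approach under discussion, not Base's actual implementation, and `checked_pow` is a made-up name), using `Base.Checked.checked_mul`:

```
# Sketch of checking each multiplication for overflow.
# `checked_pow` is a hypothetical name, not a Base function.
using Base.Checked: checked_mul

function checked_pow(x::T, n::Integer) where {T<:Base.BitInteger}
    n >= 0 || throw(DomainError(n, "negative exponent"))
    r = one(T)
    for _ in 1:n
        r = checked_mul(r, x)   # throws OverflowError instead of wrapping
    end
    return r
end

checked_pow(2, 10)   # 1024
```

Each `checked_mul` costs an extra overflow test per step, which is where the measured ~2x slowdown comes from.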

So we have similar checks for `factorial`, and:

```
julia> binomial(90,30)
ERROR: OverflowError: binomial(90, 30) overflows
```
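For comparison, `factorial` on machine integers behaves the same way (the boundary values here are for `Int64`):

```
# factorial(20) is the largest factorial that fits in Int64;
# factorial(21) throws an OverflowError rather than silently wrapping.
factorial(20)   # 2432902008176640000
```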

Presumably because those can exploit the math to throw an error without compromising performance?

Yeah. Our general stance is that we will add overflow checks for integer operations where the check is cheap relative to the operation (<20% or so). For basic integer ops, checking costs roughly 2x to 10x (2x ordinarily, 10x when it prevents hoisting bounds checks, vectorization, or other optimizations), so they are essentially impossible to check by default. Anything that includes a few divisions, however, tends to be slow enough that overflow checks become reasonable. Powers are right on the edge: we really want `x^2` and `x^3` to be fast, but someone runs into this roughly once a week, so it would be nice to give a good error here. Doing so consistently without wrecking performance isn't easy, though.

Hard-code the first 10 natural numbers in `Val` and abuse the compiler? (Although the base might also be big…)
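For context, Julia already routes literal exponents through `Base.literal_pow(^, x, Val(y))`, which dispatches on the exponent at compile time. A toy sketch of that `Val` trick (`pow` here is an illustrative name, not a real API):

```
# Toy illustration of dispatching on the exponent via Val, so small
# constant powers unroll into plain multiplies. `pow` is a made-up name.
pow(x, ::Val{0}) = one(x)
pow(x, ::Val{N}) where {N} = x * pow(x, Val(N - 1))

pow(2, Val(3))   # 8
```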

At the very least, overflow checks could be elided for `^(x::Integer, y::Integer)` when `y >= 0` and `y * ceil(log2(abs(x))) <= maxlog2(typeof(x))`. This computation is pretty cheap for most integer types: for nonnegative BitIntegers it's some basic arithmetic based on `leading_zeros`, most of which is probably already part of the `nextpow`/`prevpow` specializations for base = 2. This would probably be a fast check for 99.99% of cases that won't ultimately overflow. There's a matching check based on `y * floor(log2(abs(x)))` (EDIT: more simply, something like `y * ceil(log2(abs(x))) - y`) that can quickly determine when an overflow *will* occur, leaving only a very narrow band of cases where further checks are necessary (and even then only on the final multiply, I think).
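A rough sketch of that cheap elision check for `Int64`, restricted to positive bases for simplicity (the names `log2ceil` and `cannot_overflow` are illustrative, and the check is deliberately conservative):

```
# Conservative pre-check based on leading_zeros. If y * ceil(log2(x)) <= 62,
# then x^y <= 2^62 < typemax(Int64), so the overflow check can be skipped.
# Names are illustrative, not a proposed API.
log2ceil(x::Int64) = 64 - leading_zeros(x - 1)   # ceil(log2(x)) for x >= 1

# y > 62 with x >= 2 always overflows Int64, so bounding y also keeps the
# product y * log2ceil(x) itself from overflowing.
cannot_overflow(x::Int64, y::Int64) =
    x >= 1 && 0 <= y <= 62 && y * log2ceil(x) <= 62
```

When `cannot_overflow` returns `true`, the unchecked fast path is provably safe; when it returns `false`, the result might still fit, and only then would the slower per-multiply checks run.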

When one argument is small (e.g., less than 20), and/or when either argument is a compile-time literal, it's easy to compute the range of values for the other argument that will not overflow.
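For example, with a literal exponent of 2 the safe range for an `Int64` base reduces to a single constant:

```
# For a compile-time literal y = 2: x^2 fits in Int64 exactly when
# |x| <= isqrt(typemax(Int64)).
const MAX_SQUARE_BASE = isqrt(typemax(Int64))

MAX_SQUARE_BASE^2          # fits: still positive
(MAX_SQUARE_BASE + 1)^2    # wraps around to a negative number
```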

That actually might work. Want the honor of making the PR?

I’m pretty busy these days. Please, anybody else can feel free to take a shot at this and evaluate the performance cost.

It’d be worth exposing the unchecked exponentiation (which would likely be called from the normal `^` after the check), e.g., `Base.unchecked_pow` or `Base.unsafe_pow`, in case people still need that behavior.

It would be `unchecked`, not `unsafe`. `unsafe` has a pretty specific definition that this doesn’t meet.