Yeah, the package *just* came out. It's pretty neat!

Thanks for the encouragement.

I just released v0.0.2; it is tighter and includes some corrections.

There is now the start of a background section in the README.

I think your work could be a gateway to faster BigInts.

I would bet that a type Union{BigInt,SafeInt}, where you switch to BigInt on overflow instead of throwing an error, would be faster than the current BigInt for many computations (which need big integers, but only infrequently).
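A minimal sketch of that idea, using Base's checked arithmetic rather than anything SaferIntegers actually provides (`grow_mul` is a hypothetical helper name):

```julia
# Hypothetical promote-on-overflow multiply: stay with machine Ints on the
# fast path, and fall back to BigInt only when checked arithmetic
# actually reports an overflow.
function grow_mul(a::Int, b::Int)
    try
        return Base.Checked.checked_mul(a, b)   # fast path: no allocation
    catch e
        e isa OverflowError || rethrow()
        return big(a) * big(b)                  # slow path: arbitrary precision
    end
end
```

The return type is then `Union{Int,BigInt}`; a real implementation would want to avoid the try/catch in hot loops, but it illustrates the switch-on-overflow behavior.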

Hello, I have a small problem with SaferIntegers, still using the same code:

```
function collatz(lim)
    max = 1
    for i in 1:lim
        c = 0
        n = SafeUInt64(i)
        while n != 1
            if n&1 == 0
                n >>= 1
            else
                n = 3*n + 1
            end
            c += 1
        end
        if c > max
            max = c
            println("at $i new max=$max")
        end
    end
end

collatz(1000)
ERROR: MethodError: >>(::SaferIntegers.SafeUInt64, ::Int64) is ambiguous. Candidates:
  >>(x::T1, y::T2) where {T2<:Integer, T1<:SaferIntegers.SafeInteger} in SaferIntegers at /home/jmichel/.julia/v0.6/SaferIntegers/src/binary_ops.jl:54
  >>(x::Integer, c::Int64) in Base at operators.jl:528
Possible fix, define
  >>(::T1<:SaferIntegers.SafeInteger, ::Int64)
Stacktrace:
 [1] collatz(::Int64) at /home/jmichel/collatz.jl:7
```

That is a bug – thank you for finding it … on it.

@Jean_Michel this should get you through the night:

```
function collatz(lim)
    max = 1
    for i in 1:lim
        c = 0
        n = SafeUInt64(i)
        while n != 1
            if n&1 == 0
                n = fld(n, 2)   # <<< this is the only edit
            else
                n = 3*n + 1
            end
            c += 1
        end
        if c > max
            max = c
            println("at $i new max=$max")
        end
    end
end
collatz (generic function with 1 method)
julia> collatz(1000)
at 3 new max=7
at 6 new max=8
at 7 new max=16
at 9 new max=19
at 18 new max=20
at 25 new max=23
at 27 new max=111
at 54 new max=112
at 73 new max=115
at 97 new max=118
at 129 new max=121
at 171 new max=124
at 231 new max=127
at 313 new max=130
at 327 new max=143
at 649 new max=144
at 703 new max=170
at 871 new max=178
```

Thank you! (I found the workaround myself.) I think your package is very useful.

I wonder if a type Union{SafeInt, BigInt}, where conversion is done on overflow, could work and be faster than using BigInts in the above example (where most integers are small and very few are big).

I did take your earlier note seriously. I am partway through rewriting it, and it seems to me that there could be a designated SafelyWidens type. Most of the time it is good not to widen, find the issue, and then rerun with larger types if appropriate. I do understand there are important uses where one wants the widening – but do you really want to jump straight into BigInts? What about expanding to Int128 and sitting there?
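For what it's worth, the staged-widening idea (go to Int128 first, and only then to BigInt) could be sketched like this; `widen_add` is hypothetical and not part of SaferIntegers:

```julia
# Hypothetical staged widening: Int64 -> Int128 -> BigInt, promoting one
# step at a time, and only when checked addition overflows the current width.
widen_add(a::BigInt, b::BigInt) = a + b          # nowhere wider to go
function widen_add(a::T, b::T) where {T<:Union{Int64,Int128}}
    try
        return Base.Checked.checked_add(a, b)
    catch e
        e isa OverflowError || rethrow()
        W = T === Int64 ? Int128 : BigInt        # next wider type
        return widen_add(W(a), W(b))
    end
end
```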

My need arose while porting a part of the group theory system GAP to Julia. In this system you meet integers that need to be exact but can be arbitrarily big (like the order of the monster simple group, or of the finite group of Lie type E_8 over the field with 2 elements). Most computations produce *in practice* small integers most of the time, but occasionally big ones, whose size you cannot predict in advance; the behavior of my toy example is similar, which is why I gave it.

Of course different applications may have different needs. Union{SafeInt64, SafeInt128} may be what’s needed in some other application.

I know of GAP. Once SafeInts get larger than Int128, they are unlikely to regroup – is that a problem? I cannot extend safety beyond Int128.

Not regrouping – you mean not converting back automatically to a smaller type when possible? It could probably be a problem sometimes, but much more uncommonly than the need for widening: it would be uncommon enough that doing it manually when needed (at the end of a computation, or at some chosen step in it) would suffice.

I made changes that might solve the shifting.

For organizational sanity, they reside on another branch: “shift”.

If you need help making that branch the one you use, ask.

(I cannot test it tonight, so let me know of any fixups)

I think you are simply misusing BigInt. I get much better timings by using in-place operations (the following works only on 0.7; I don’t know whether the GMP/MPZ wrapper exists in 0.6):

```
#using Int
@btime collatz_small(5000)
736.384 μs (0 allocations: 0 bytes)
#using BigInt, essentially your code
@btime collatz_slow(5000)
134.810 ms (3111832 allocations: 53.44 MiB)
#using inplace ops
@btime collatz_fast(5000)
8.832 ms (3 allocations: 48 bytes)
```

The code:

```
using Base.GMP.MPZ
using BenchmarkTools
function collatz_small(lim)
    max = 1
    i = 1
    while i < lim
        i += 1
        c = 0
        n = i
        while n > 1
            if n&1 == 0
                n >>= 1
            else
                n = 3*n + 1
            end
            c += 1
        end
        if c > max
            max = c
            #println("at $i new max=$max")
        end
    end
    max
end

function collatz_slow(lim)
    max = 1
    i = 1
    while i < lim
        i += 1
        c = 0
        n = BigInt(i)
        while n > 1
            if n&1 == 0
                n >>= 1
            else
                n = 3*n + 1
            end
            c += 1
        end
        if c > max
            max = c
            #println("at $i new max=$max")
        end
    end
    max
end

function collatz_fast(lim)
    max = 1
    i = UInt(1)
    n = BigInt(0)
    while i < lim
        i += 1
        c = 0
        MPZ.set_ui!(n, i)
        while n > 1
            if !(MPZ.tstbit(n, 0))
                MPZ.fdiv_q_2exp!(n, 1)
            else
                MPZ.mul_ui!(n, UInt(3))
                MPZ.add_ui!(n, UInt(1))
            end
            c += 1
        end
        if c > max
            max = c
            #println("at $i new max=$max")
        end
    end
    max
end
@btime collatz_small(5000)
@btime collatz_slow(5000)
@btime collatz_fast(5000)
```

I do not think I am “misusing BigInt”. I assumed that BigInt can be used in a natural way to write natural code, which can be written the same way independently of the datatype used.

Honestly, if one has to write the kind of direct-calling-C library code that you do, then I do not see the point of using a high-level language like Julia. Perhaps your code shows that the current implementation of BigInt is misusing MPZ.


You are of course completely right: code like the above only makes sense after profiling and is generally a pain. Unfortunately I don’t think that this is easy to fix at the language level, in a way that does not misuse MPZ but is just as easy to use as Int (fundamentally because BigInt wants to be mutable, while Int wants to be immutable).

I think there could be syntactic sugar to make the above code less ugly, because you are right that “Julia can be just as fast as C (if you write it to look just like C)” is not exactly a shining endorsement.

I hope something could be done with macros, or by appropriating dot-notation (`broadcast!`), in order to make fast `BigInt` code just as medium-easy to write as non-allocating code handling arrays: use in-place ops, hoist allocation of temporaries out of loops, and fall back to direct MPZ/BLAS calls only rarely, when the syntactic sugar fails. Loop fusion does not really apply to BigInt, since MPZ does not offer many fused operations in its API (and the libgmp addmul is not even exposed by `Base.GMP.MPZ`).
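As a rough illustration of the “hoist temporaries out of loops” point (a sketch only, reusing the `MPZ` helpers from the benchmark above; `sum_squares` is a made-up example):

```julia
using Base.GMP.MPZ

# Sketch of the hoisting pattern: the BigInt buffers are allocated once,
# outside the loop, and mutated in place on every iteration, instead of
# allocating a fresh BigInt per iteration.
function sum_squares(lim)
    acc = BigInt(0)                  # allocated once, reused throughout
    tmp = BigInt(0)
    for i in 1:lim
        MPZ.set_ui!(tmp, UInt(i))    # tmp = i
        MPZ.mul!(tmp, tmp)           # tmp *= tmp, i.e. i^2
        MPZ.add!(acc, tmp)           # acc += tmp
    end
    acc
end
```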

I would hope for an even greedier solution: ensure that this more complicated in-place code can also be written to work for immutables / plain `Int`. This is AFAIK a huge problem in the differential equations community, where you would ideally write code that is generic over arrays and static arrays (which doesn’t really work today if you want speed). Of course this cannot work every time: there are in-place algorithms that do not really work with immutables (because of scoping), and `Ref` appears to be too slow (according to ChrisRackauckas).

Well, it’s not really an issue for us, because I just double our code and handle both cases manually via dispatch – just like handling BigFloat efficiently. It isn’t the ideal way to handle it, but it’s what we have for now. Of course there are issues in Julia Base to fix up these kinds of things.
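That “double the code, pick by dispatch” pattern might look like this minimal sketch (hypothetical names, reusing the `MPZ` helpers from the benchmark above):

```julia
using Base.GMP.MPZ

# One logical operation written twice: an in-place method that mutates a
# preallocated BigInt, and a value-returning method for immutable Int.
# Dispatch on the argument type selects the right implementation.
function triple_plus_one!(n::BigInt)
    MPZ.mul_ui!(n, UInt(3))   # n *= 3, in place
    MPZ.add_ui!(n, UInt(1))   # n += 1, in place
    return n
end
triple_plus_one(n::Int) = 3n + 1
```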

But there is not really an `Array <: AbstractMutableArray <: AbstractArray` and `SArray <: AbstractImmutableArray <: AbstractArray`, afaik?

So you would either need more concrete containers or `@generated` in order to dispatch correctly? And the same for `MutableNumber <: Number` and `ImmutableNumber <: Number` for scalars.

We’re dispatching on function signatures. Now that’s a trip.