Scaling Big Ints? Rounding?

Hey guys, I have a unique problem. I ingest raw hex data:

                println(key.second["hex"])
                println(parse(BigInt, key.second["hex"]))
                println(floor(BigInt, parse(BigInt, key.second["hex"])))
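                # NOTE: 0.0000000001 is a Float64 literal, so the scale factor in the next line only has Float64 precision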
                println(floor(BigInt, parse(BigInt, key.second["hex"])) / 0.0000000001)

Output:

0x5781f15b552ec5c60b
1614230099934426023435
1614230099934426023435
1.614230099934425964625050486573355256443030648984935086684656097291562845932778e+31

I’ve tried many different paths to digest and round or floor the numbers to achieve something like:
1614.230099934426023435

I need to move the decimal place of the BigInt 1614230099934426023435 so it lands in the hundreds or thousands place for my math to work as efficiently as it needs to. If I parse as a float and try to trunc or floor my way to an int, I always end up with something in the tens place with a massive 1e# exponent. I’ve also scaled by multiplying by, say, 1e-10, but that again leaves me with a number that’s similar to 1.6xxxxxx1e-10.
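For concreteness, the target is effectively this split of the integer (a sketch assuming 18 digits is the right scale; divrem keeps everything in exact BigInt arithmetic):

julia> divrem(parse(BigInt, "0x5781f15b552ec5c60b"), big(10)^18)
(1614, 230099934426023435)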

Any advice or strategy to scale down the BigInt in the manner I specified? I’ve been knocking my head against this for days now; I hope one of you pros can show me the hopefully obvious path!

Cheers

The problem is that decimal literals only have Float64 precision. If you’re trying to scale a BigFloat, use a big"" literal. You could also do something like this:

julia> rescale(v, n) = big(v) / exp10(n)
rescale (generic function with 1 method)

julia> rescale(0x5781f15b552ec5c60b, 18)
1614.230099934426023435000000000000000000000000000000000000000000000000000000008
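To see why the Float64 literal bites you: it isn’t actually the value you wrote down. The comparison below promotes the Float64 to BigFloat exactly, and it doesn’t match the full-precision parse:

julia> 0.0000000001 == big"0.0000000001"
false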

I’m confused. What type of arithmetic do you want to use? BigInt, BigFloat, Float64, decimal floating point, …?

Are you doing computations which are exact (rare for nontrivial non-integer calculations!), or will they involve some roundoff error? If it’s the latter, why are you messing with BigInt, and what is wrong with just 1614230099934426023435.0? Or, if you want exact BigInt calculations, why are you trying to move the decimal place?


@stillyslalom that may work!! Have to run to an appointment but will try it out shortly! Thank you :smiley:
Quick question: how can I then shave off the 0’s? I guess in this case I could multiply by 1e-# to remove them; may have to play with that too when I’m back.
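Maybe it’s really just a formatting question? A sketch of what I mean, reusing your rescale and assuming Printf can format a BigFloat:

julia> using Printf

julia> @sprintf("%.18f", rescale(0x5781f15b552ec5c60b, 18))  # print rounded to 18 decimal places
"1614.230099934426023435"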

@stevengj the complexity is that the arithmetic I’m using fails terribly with numbers above about 1e10. And the numbers I’m using are massive, so I need to scale them down and then reinflate them after getting my answer. The only real success I’ve had with accurate computation is with numbers that are scaled to the hundreds or thousands place, even if that’s not their actual value :confused: Yeah, it’s super odd, but that’s why I’m doing this nonsensical mutation.

appreciate you both for the replies :slight_smile:

It would be helpful if you could post an example of the sort of calculation that’s causing you problems, since addressing those problems directly will be more robust than trying to work around them by scaling.


Why not use floating-point arithmetic? Why are you using bigints at all?


@stillyslalom I must admit the math I’m using is something I’m trying to learn as I go: convex optimization. GitHub - bcc-research/CFMMRouter.jl: Convex optimization for fun and profit. (Now in Julia!)

I use real market data, and those are massive numbers in the world of crypto.

@stevengj I have tried with Float64, 128, and 32, but because the numbers still land in the tens place (1.xxxe1e10x) even when scaled down by, say, 1e-10, the generated ‘path’ I’m supposed to get as output is all sorts of out of whack. From dissecting the repo’s tests, it really seems to work best at scale when the numbers ingested are like I proposed: BigInt or Int scaled to the hundreds or thousands place.

ex:

7.858288944525227, 196.16731006788325
387.64963409968414, 741.404696807845
961.3189263482, 679.7504055891025

Going to give the first suggestion a try now.

Float64 can represent numbers > 10^300. If you are overflowing that with market data, you probably have a bug.
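For example:

julia> floatmax(Float64)
1.7976931348623157e308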

In general, if you think you need bignums to deal with real-world data it’s a good indication you are making a mistake.


This is usually true. And with an optimization problem, note that if f(x) is a positive function and f(x) is optimized at x*, then log(f(x)) is optimized at x* too, because log is a monotone increasing function. So when you try to optimize something and you’re working with crazy weird numbers, usually you should be calculating logarithms.
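A toy sketch of the overflow this sidesteps (hypothetical objective, purely for illustration):

julia> f(x) = exp(800 - (x - 3)^2);   # peaks at x = 3, but exp(800) overflows Float64

julia> f(3)
Inf

julia> logf(x) = 800 - (x - 3)^2;     # log of f: same maximizer, tame numbers

julia> logf(3)
800.0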
