Standardizing a vector of Decimals

I’m trying to add Decimal support to my database package; loading decimals into a table requires a fixed Decimal column definition such as the following:

create table test (col1 Decimal(7,4));

Within Julia, you can have a Decimal[] vector where all of the exponents are different:

julia> decvec = Decimal[decimal(1.4549875498754), decimal(1.7), decimal(145498.75498754)]
3-element Array{Decimal,1}:
 Decimal(0, 14549875498754, -13)
 Decimal(0, 17, -1)
 Decimal(0, 14549875498754, -8)

Is there an existing way to standardize the precision and scale? If this doesn’t exist and I have to do it myself, do I need to worry about floating-point error? I’m generally not a Decimal user, so I don’t want to make an erroneous assumption for those of you who care about exactness.
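For concreteness, here’s the kind of thing I had in mind doing myself (assuming I can rely on the `s`/`c`/`q` fields suggested by the printed form above — sign, integer coefficient, exponent; `standardize` is my own name):

```julia
using Decimals  # assuming Decimals.jl, with Decimal(s, c, q) as printed above

# Rescale every Decimal to the most negative exponent (i.e. largest scale)
# in the vector. Multiplying the integer coefficient by a power of ten is
# pure integer arithmetic, so it should not introduce any rounding.
function standardize(v::Vector{Decimal})
    qmin = minimum(d.q for d in v)
    [Decimal(d.s, d.c * big(10)^(d.q - qmin), qmin) for d in v]
end
```

After this, every element shares the exponent `qmin`, so the coefficients are directly comparable integers.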

If you can bound the number of decimal digits you need, you can use decimal floating-point, e.g. with the DecFP.jl package. For example, the Dec128 type can represent any decimal number with up to 34 significant digits exactly (within a huge range of exponents).
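For instance (constructing values with `parse`; within the 34-digit limit, decimal arithmetic avoids the familiar binary rounding artifacts):

```julia
using DecFP

# 0.1 and 0.2 are not exactly representable in binary floating point,
# but they are in decimal floating point:
x = parse(Dec128, "0.1")
y = parse(Dec128, "0.2")
x + y == parse(Dec128, "0.3")   # exact in decimal arithmetic
```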


Thanks for pointing out the DecFP.jl package, I wasn’t aware of that.

For my problem, I’m trying to build support for what other people might want to load, so I’m not sure I can restrict things too much (other than building types for each of Decimal, Dec32, Dec64 and Dec128). For example, I anticipate a user having a DataFrame where some of the columns could be Decimal. So I’m imagining that I need to check all the values of the entire Decimal column to figure out the widest precision and scale, then set that in my create table statement.
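Concretely, something like this hypothetical scan is what I’m imagining (again assuming the `s`/`c`/`q` layout shown in the original question; `column_type` and `digits_in` are my own names, not library API):

```julia
using Decimals  # assuming Decimals.jl with fields c (coefficient) and q (exponent)

digits_in(c) = c == 0 ? 1 : ndigits(c)

# Derive the Decimal(precision, scale) parameters for a create-table
# statement from a column of Decimals: scale is the widest number of
# fractional digits, precision adds the widest number of integer digits.
function column_type(col)
    scale     = maximum(-d.q for d in col)                    # digits right of the point
    intdigits = maximum(digits_in(d.c) + d.q for d in col)    # digits left of the point
    precision = max(intdigits, 0) + scale
    "Decimal($precision,$scale)"
end
```

For the example vector above this would give `Decimal(14,13)`, since 145498.75498754 needs 6 integer digits and 1.4549875498754 needs 13 fractional digits.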

Ultimately, though, I need to pass an Int64 to our database backend, which is what drew me to the Decimals.jl package in the first place.
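E.g. a hypothetical conversion along these lines, where `scale` comes from the column definition (`to_int64` is my own name; it just shifts the integer coefficient, so it stays exact as long as no digits would be dropped):

```julia
using Decimals  # assuming Decimals.jl with fields s (sign), c (coefficient), q (exponent)

# Represent a Decimal as the Int64 the backend expects for a column
# with the given scale: value * 10^scale, computed in integer arithmetic.
function to_int64(d::Decimal, scale::Int)
    shift = d.q + scale   # extra powers of ten needed to reach exponent -scale
    shift >= 0 || error("value has more fractional digits than scale=$scale allows")
    Int64((-1)^d.s * d.c * big(10)^shift)
end
```

So 1.7 in a `Decimal(7,4)` column would be stored as 17000.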

Can you anticipate a user input with more than 34 accurate significant digits? Unless your users are doing number theory that would seem surprising.

That’s my point though, I don’t know what people might try to do (as long as it’s possible in Julia). But I think we’re talking about two different things.

From what I can see of the DecFP package, arrays are auto-promoted to the widest type, which is great and makes the problem described in the original question moot:

julia> a = [Dec32(1.4567), Dec64(1.3456767645), Dec128(1000.459696)]
3-element Array{Dec128,1}:

But if someone passes an arbitrary-precision Decimal[] vector, am I correct that I have to inspect every value and rescale them all to the same precision? If so, does that actually work, or will I introduce numerical error (thereby ruining the original Decimal values)?
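To make the question concrete: my understanding is that widening, say, 1.7 from exponent -1 to exponent -13 just multiplies the integer coefficient by 10^12, so the value itself should be untouched (assuming the `Decimal(s, c, q)` layout shown earlier):

```julia
using Decimals

# 1.7 at scale 1 vs. the same value widened to scale 13 — the coefficient
# shift is exact integer arithmetic, so the two should compare equal:
d  = Decimal(0, 17, -1)
d2 = Decimal(0, 17 * big(10)^12, -13)
d == d2
```

Is that reasoning right, or is there a subtlety I’m missing?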