Mostly yes.
ChangePrecision works by taking float literals (which are indeed stored in the AST as `Float64`) and printing them to a string, then re-parsing that string in the desired numeric type. In almost all cases, the Grisu algorithm used to print floating-point values will produce a string equivalent to the original decimal input, since it prints the shortest string that parses back to the same value.
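Conceptually, the transformation amounts to the following sketch (the `reprecision` helper is hypothetical, written here just to illustrate the print-and-reparse idea, not ChangePrecision's actual implementation):

```julia
# Print a Float64 literal to its shortest decimal string, then
# re-parse that string in the target numeric type.
reprecision(T, x::Float64) = parse(T, repr(x))

reprecision(BigFloat, 0.3)  # == BigFloat("0.3"), not BigFloat(0.3)
```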
For example, the literal `0.3` parses to a `Float64` value that is not quite 0.3, but it prints as `"0.3"`, so ChangePrecision will preserve this exact decimal value when converting to `Dec64`, or for `BigFloat` it will give you `BigFloat("0.3")`.
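With the `@changeprecision` macro this looks like (assuming the rewriting behaves as described above):

```julia
using ChangePrecision

# The 0.3 literal inside the block is re-parsed as a BigFloat, so it
# becomes the correctly rounded BigFloat for the decimal 0.3.
x = @changeprecision BigFloat begin
    0.3
end

x == BigFloat("0.3")  # true
x == BigFloat(0.3)    # false: BigFloat(0.3) inherits the Float64 rounding error
```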
However, it is not 100% reliable: in the unlikely event that you enter the literal `0.099999999999999999`, ChangePrecision will treat it as if you had typed `0.1`, since `repr(0.099999999999999999) == "0.1"`.
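You can check this in the REPL: both literals round to the same `Float64` value, and `repr` returns the shortest string that round-trips to it:

```julia
0.099999999999999999 == 0.1   # true: both round to the same Float64
repr(0.099999999999999999)    # "0.1", the shortest round-tripping string
```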
ChangePrecision is convenient for quick hacks, I think: taking an existing script and quickly experimenting with another precision. But in the long run, you should strive to write libraries of code that work for any precision (indeed, for any `Number` type), and if you need absolute guarantees that human inputs are represented exactly, you should probably use DecFP (or similar) directly.
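As a small sketch of what precision-generic code looks like (the `geometric_sum` function is just an illustration): take the working type from the arguments and derive constants with `zero`/`one`, rather than hard-coding `Float64` literals:

```julia
# Precision-generic: the working type T is inferred from the argument,
# and constants are derived from T rather than written as 0.0 / 1.0.
function geometric_sum(r::T, n::Integer) where {T<:Number}
    s = zero(T)
    term = one(T)
    for _ in 1:n
        s += term
        term *= r
    end
    return s
end

geometric_sum(0.5, 10)       # Float64
geometric_sum(big"0.5", 10)  # BigFloat, with no code changes
```

The same function works with `Dec64` from DecFP (e.g. `geometric_sum(d"0.5", 10)`), which is the route to take when decimal inputs must be represented exactly.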