```
julia> 1.2 + 0.12
1.3199999999999998
```
Excellent news, it’s within floating point precision of the true result. And welcome to Julia!
…and just to be clear: this happens with any programming language that uses floating point arithmetic… and that pretty much covers them all.
Also, you may want to consider Decimals.jl.
This is the post you should read to get a quick overview of floating point issues: PSA: floating-point arithmetic
Julia should probably cover this up by rounding the printed results; this behaviour can be pretty confusing for beginners.
Thanks very much. But I am worried that the error accumulation is serious.
You can use BigFloat with arbitrary precision to check whether your algorithm loses precision.
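For instance, a minimal sketch of that check (the variable names are my own): redo the computation with `big"..."` literals, which are parsed at full BigFloat precision, and compare against the Float64 result.

```julia
# Compare a Float64 computation against the same computation in
# BigFloat to estimate how much roundoff the Float64 version incurred
x = 1.2 + 0.12                 # Float64 result, already rounded
y = big"1.2" + big"0.12"       # BigFloat literals, parsed at full precision
err = abs(BigFloat(x) - y)     # roundoff error of the Float64 computation
```

Here `err` comes out around 1.6e-16, i.e. within one unit in the last place of Float64.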
To the best of my understanding it is as serious as in other languages. If you need exact decimal results, use the decimal package mentioned above.
Decimals.jl (and similar packages like the more full-featured DecFP.jl) only eliminate rounding errors in converting human decimal inputs into their computer representations. Arithmetic operations can still yield a result that has to be rounded, and hence can still incur roundoff errors.
But this is true for all computer languages: it is intrinsic to the available ways for computers to represent numbers.
Understanding roundoff accumulation is a complicated subject, but a quick test is to repeat your calculations in different precisions to see how sensitive your answer is to roundoff errors. You can also use something like IntervalArithmetic.jl, but this will often give overly pessimistic error estimates.
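A hedged sketch of that repeat-in-two-precisions test (names are illustrative):

```julia
# Run the same computation in Float32 and Float64 and compare.
# If the two results disagree in their leading digits, roundoff is
# likely already significant in the Float64 answer as well.
s32 = Float32(1.2) + Float32(0.12)   # single precision
s64 = 1.2 + 0.12                     # double precision
sensitivity = abs(s64 - Float64(s32))
```

For a one-line sum the disagreement is tiny (around 1e-8 here); in a long-running computation, watching this gap grow is the warning sign.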
```
julia> round(1.2 + 0.12, digits=4)
1.32
```
As other people have said, this is the same in any programming language that uses floating-point arithmetic; at the low level they all use the same or similar instructions to carry out the computation, and they follow the same IEEE 754 rules.
This is the result from python 3.6 on my mac:
```
Python 3.6.3 |Anaconda custom (64-bit)| (default, Oct 6 2017, 12:04:38)
[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> 1.2+0.12
1.3199999999999998
>>>
```
And this is the result from R:
```
> 1.2+0.12
[1] 1.32
## Great! Isn't this the true result? But wait... Let's try to print more decimal points...
> sprintf("%.15f",1.2+0.12)
[1] "1.320000000000000"
> sprintf("%.16f",1.2+0.12)
[1] "1.3199999999999998"
> sprintf("%.17f",1.2+0.12)
[1] "1.31999999999999984"
```
The reason the original result in R seems “correct” is only that R uses a lower default precision when printing results; under the hood, it is still the same.
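The same trick works in Julia: printing more digits reveals the value that is actually stored (this uses the standard-library Printf module).

```julia
using Printf   # standard library

# Print the stored double with 17 digits after the decimal point,
# matching the sprintf("%.17f", ...) call shown for R above
@printf("%.17f\n", 1.2 + 0.12)   # prints 1.31999999999999984
```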
I disagree; I hate when languages do this. If someone finds it surprising, perhaps they will use it as an opportunity to learn about floating point arithmetic. I don’t mind seeing this question over and over.
The keyword is: “error accumulation”.
You are right: if your calculation takes hours, the error may become too big to accept. If exact (“no error”) arithmetic or BigFloat is too slow, and Float64 also cannot satisfy your requirement, I suggest you try:
Fixed-point numbers should be used if you want the results to be exact. You don’t always need a library for this task: if you’re dealing with money, using a cent count (integer) instead of a dollar value for internal calculations solves the precision issue.
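A minimal sketch of the cent-count idea (the variable names are my own): keep amounts as integer cents so that addition is exact, and convert to dollars only for display.

```julia
# Represent money as an integer number of cents; integer addition
# has no roundoff (assumes amounts fit comfortably in Int64)
price_cents = 120                       # $1.20
tax_cents   = 12                        # $0.12
total_cents = price_cents + tax_cents   # exactly 132
total = total_cents / 100               # convert to dollars only for display
```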