```
julia> 1.2 + 0.12
1.3199999999999998
```

Excellent news, it's within floating-point precision of the true result. And welcome to Julia!

…and just to be clear: this happens with any programming language that uses floating-point arithmetic… and that pretty much covers them all.

This is the post you should read to get a quick overview of floating point issues: PSA: floating-point arithmetic

Julia should probably cover this up by rounding the printed results; this behaviour can be pretty confusing for beginners.

You can use `BigFloat` with arbitrary precision to check whether your algorithm loses precision.
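A minimal sketch of that idea: redo the computation in `BigFloat` (256-bit precision by default) and compare against the `Float64` result. The `big"…"` string macro parses the decimal literal directly at high precision.

```julia
x64  = 1.2 + 0.12              # Float64 arithmetic
xbig = big"1.2" + big"0.12"    # same computation in BigFloat

# The difference estimates the Float64 roundoff error;
# it is tiny, on the order of eps(1.32) ≈ 2.2e-16.
err = abs(xbig - x64)
println(err)
```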

To the best of my understanding, it is as serious as in other languages. If you need exact results, use the decimal package mentioned above.

Decimals.jl (and similar packages like the more full-featured DecFP.jl) only eliminate rounding errors in converting human decimal inputs into their computer representations. Arithmetic operations can still yield a result that has to be rounded, and hence can still incur roundoff errors.

But this is true for all computer languages: it is intrinsic to the available ways for computers to represent numbers.

Understanding roundoff accumulation is a complicated subject, but a quick test is to repeat your calculations in different precisions to see how sensitive your answer is to roundoff errors. You can also use something like IntervalArithmetic.jl, but this will often give overly pessimistic error estimates.
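A sketch of that quick test (the repeated sum is just a hypothetical toy computation): run the same calculation in `Float32`, `Float64`, and `BigFloat`, then compare the answers to see how sensitive they are to precision.

```julia
# Toy computation: sum 0.1 one hundred thousand times, in precision T.
naive_sum(T) = sum(T(0.1) for _ in 1:10^5)

s32  = naive_sum(Float32)
s64  = naive_sum(Float64)
sbig = naive_sum(BigFloat)

# If the answers agree closely across precisions, roundoff is probably
# not hurting you; here the Float32 error is visibly larger.
println(abs(s32 - sbig))
println(abs(s64 - sbig))
```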

As other people have said, this is the same in any programming language that uses floating-point arithmetic; at a low level, they all use the same or similar instructions to carry out the computation, and they follow the same rules.

This is the result from Python 3.6 on my Mac:

```
Python 3.6.3 |Anaconda custom (64-bit)| (default, Oct 6 2017, 12:04:38)
[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> 1.2+0.12
1.3199999999999998
>>>
```

And this is the result from R:

```
> 1.2+0.12
[1] 1.32
## Great! Isn't this the true result? But wait.... Let's try to print more decimal points...
> sprintf("%.15f",1.2+0.12)
[1] "1.320000000000000"
> sprintf("%.16f",1.2+0.12)
[1] "1.3199999999999998"
> sprintf("%.17f",1.2+0.12)
[1] "1.31999999999999984"
```

The reason why the original result in `R` seems "correct" is only because `R` uses a lower default precision to show the result; under the hood, it is still the same.
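For comparison, the same experiment works in Julia using the `Printf` standard library; asking for more digits reveals the same underlying value the R `sprintf` calls showed.

```julia
using Printf

# With 15 digits the result still looks exact...
@printf("%.15f\n", 1.2 + 0.12)   # 1.320000000000000

# ...but with 17 digits the roundoff becomes visible.
@printf("%.17f\n", 1.2 + 0.12)   # 1.31999999999999984
```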

I disagree; I hate when languages do this. If someone finds it surprising, perhaps they will use it as an opportunity to learn about floating-point arithmetic. I don't mind seeing this question over and over.