“WolframAlpha” returns “30”. There are different ways to define integer division.
Please provide a minimum working example (copy and paste the code you ran, and paste it inside backticks).
@dpsanders: this is probably about
```
julia> 6.0 ÷ 0.2
29.0

julia> 6.0 - (0.2 * 29)
0.1999999999999993
```

```
help?> ÷
"÷" can be typed by \div<tab>

search: ÷ div

  div(x, y)
  ÷(x, y)

  The quotient from Euclidean division. Computes x/y, truncated to an integer.

  Examples
  ≡≡≡≡≡≡≡≡≡≡

  julia> 9 ÷ 4
  2

  julia> -5 ÷ 3
  -1
```
To be fair, the float 0.2 is larger than two tenths,
```
julia> 0.2 > 2//10
true
```
so dividing 6 by the float(!) 0.2 and rounding down exactly gives 29. But
```
julia> 6.0 / 0.2
30.0
```
adding to the confusion: 30.0 is the closest Float64 to the result of dividing 6 exactly by Float64(0.2), a real number slightly smaller than 30 but very close to it.
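For anyone who wants to check the exact stored values outside Julia: Python floats are the same IEEE 754 binary64 values as Float64, so the exact rational behind the literal 0.2 can be inspected with Python's `fractions` module, for example:

```python
from fractions import Fraction

# Fraction(0.2) recovers the exact rational value of the binary64
# closest to 0.2; it is slightly larger than 2/10.
stored = Fraction(0.2)
print(stored > Fraction(1, 5))        # True

# Dividing 6 by that exact value gives a number just below 30,
# so rounding down (truncating) yields 29.
quotient = Fraction(6) / stored
print(quotient < 30, int(quotient))   # True 29
```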
This is the doc string for `div`:

> The quotient from Euclidean division. Computes x/y, truncated to an integer.
It does not match what occurs with:
```
julia> div(6.0, 0.2), trunc(6.0 / 0.2)
(29.0, 30.0)
```
This is what really occurs:

```
julia> # trd(x, y) is to trunc(x/y) as fld(x, y) is to floor(x/y) and cld(x, y) is to ceil(x/y)
julia> trd(x, y) = signbit(x) === signbit(y) ? fld(x, y) : cld(x, y)

julia> div(6.0, 0.2), trd(6.0, 0.2)
(29.0, 29.0)

# test
julia> div(6.0, 0.2), div(6.0, -0.2), div(-6.0, 0.2), div(-6.0, -0.2)
(29.0, -29.0, -29.0, 29.0)

julia> trd(6.0, 0.2), trd(6.0, -0.2), trd(-6.0, 0.2), trd(-6.0, -0.2)
(29.0, -29.0, -29.0, 29.0)
```
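For comparison, the same trd idea can be sketched in Python (whose float `//` is a floor division computed from the exact remainder, so it serves as the fld building block here; the name `trd` and the sign dispatch simply mirror the Julia sketch):

```python
import math

def trd(x, y):
    """Truncating division: floor when the signs agree, ceil otherwise."""
    if math.copysign(1.0, x) == math.copysign(1.0, y):
        return x // y        # same signs: truncation equals floor
    return -((-x) // y)      # opposite signs: truncation equals ceil

print(trd(6.0, 0.2), trd(6.0, -0.2), trd(-6.0, 0.2), trd(-6.0, -0.2))
# 29.0 -29.0 -29.0 29.0
```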
Isn’t it strange that it’s defined on floating points at all? What’s the use case for these semantics?
```
julia> using IntervalArithmetic

julia> /(6.0, 2.0, RoundDown)
3.0

julia> /(6.0, 0.2, RoundDown)
29.999999999999996

julia> /(6.0, 0.2, RoundUp)
30.0
```
This shows that the true result of dividing the Float64 6.0 by the Float64 written 0.2 is a real number between 29.999999999999996 and 30.0.
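Plain Python can recover the same bracketing interval without IntervalArithmetic, since `/` rounds to nearest and `math.nextafter` (Python ≥ 3.9) gives the neighbouring float:

```python
import math

q = 6.0 / 0.2                   # true quotient rounds up to 30.0
below = math.nextafter(q, 0.0)  # the next float toward zero
print(below, "< true quotient <=", q)
# 29.999999999999996 < true quotient <= 30.0
```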
```
julia> big(0.2)
2.00000000000000011102230246251565404236316680908203125e-01

julia> big(6.0) / big(0.2)
2.999999999999999833466546306226528180918982845108534622869786706115556703123498e+01
```
Alas, docstrings are still written and “checked” by humans.
IIUC, the code tools (which were implemented by superhumans) agree with you:
```
julia-1.1> @code_lowered div(6.0, 0.2)
CodeInfo(
647 1 ─ %1 = $(Expr(:static_parameter, 1))
    │   %2 = (Base.rem)(x, y)
    │   %3 = x - %2
    │   %4 = %3 / y
    │   %5 = (Base.round)(%4)
    │   %6 = (Base.convert)(%1, %5)
    └──      return %6
)
```
The former does exact mathematical division of the number represented by 6.0 (exactly the integer 6) by the number represented by 0.2 (slightly more than 2/10), a quotient slightly less than 30, and then truncates that result to 29. The latter does float division of the same values and rounds the result to the nearest representable float value, which is exactly 30. Only after rounding the intermediate result does it truncate, which doesn’t change anything since it’s already an integer. It’s a subtle difference, but it’s consistent and correct.
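The two orders of operation can be made explicit in Python, which uses the same binary64 values; `Fraction` does the exact division:

```python
import math
from fractions import Fraction

# (1) exact division of the stored values, then truncate: 29
exact_first = math.trunc(Fraction(6.0) / Fraction(0.2))

# (2) float division first (the quotient rounds to the nearest
#     double, exactly 30.0), then truncate: 30
round_first = math.trunc(6.0 / 0.2)

print(exact_first, round_first)   # 29 30
```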
In some finance applications, where it is vital to have an exact representation of cents even though numbers are in dollars, people use Float64s with a constant scaling factor of 1//100. (For finer resolution, they typically use binary fractions of cents.)
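A minimal sketch of the scaled-representation idea (hypothetical names; amounts are held as whole cents, so the scale factor is 1/100):

```python
# $6.00 and $0.20 stored as whole cents; both are exactly
# representable as floats, so the division involves no rounding at all.
budget_cents = 600.0
price_cents = 20.0
shares = budget_cents // price_cents
print(shares)   # 30.0
```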
It’d be fairly easy to create a Julia package to handle these kinds of numbers if people need it.
Rational is closed (exact) under ^ with integer powers (among other things), and should be the first choice for someone who wants this kind of precision (e.g. for finance, labels on plot axes, etc.).
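Python's `fractions.Fraction` plays the same role as Julia's Rational; the arithmetic below is exact, including integer powers:

```python
from fractions import Fraction

six = Fraction(6)
two_tenths = Fraction(2, 10)      # exactly 1/5, unlike the float 0.2
print(six / two_tenths)           # 30
print((six / two_tenths) ** 2)    # 900, still exact
```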
Also Decimals.jl does a good job (even though it seems rem and maybe div are not defined yet):

```
julia> using Decimals

julia> Decimal(6) / Decimal(.2)
Decimal(0, 3, 1)

julia> Int(ans)
30
```
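Python's standard `decimal` module behaves analogously, since 0.2 is exactly representable in base 10:

```python
from decimal import Decimal

q = Decimal("6") / Decimal("0.2")   # exact in decimal arithmetic
print(q, int(q))                    # 30 30
```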
There’s a performance trade-off though. Computing exactly how many shares you can buy for $6.00 at a share price of $0.20 is much faster in scaled floats than using Rational. Probably doesn’t matter for plot axes, but in high-frequency trading, people pay exorbitant rents for rack space close to the stock exchange servers just to save a few nanoseconds on the time it takes to place an order…
For high-frequency trading one has to deal with the weird nature of floats™ anyway.
It is curious to see how a simple misunderstanding about floating point evolved into requirements of HFT.
Presumably the companies paying those exorbitant rents can also hire developers who understand floating point, or implement a fixed point float solution that is good enough for their requirements.
Yeah, the reference to HF trading is perhaps a bit exaggerated. The point I wanted to make was: speed matters sometimes.
Look out for my paper in the AES Julia special edition, titled Optimal polynomial form characteristic methods; in it I have created new mathematical terminology for floating point error bounds, if that sort of thing interests you. I will also be posting the article on GitHub. The paper gives a detailed mathematical explanation of why the floats form a pseudo-algebra, and how that translates into abstract syntax tree optimization to select the best “equivalent” tree of operations. Actually, there is no equivalence, only a bound and characteristics that should be minimized; the paper introduces new characteristic methods to optimize this floating point pseudo-algebra. Floating point arithmetic is actually quite strange and interesting, and there are still unexplored topics in the area.
There is also the DecFP.jl package, which wraps a library from Intel supporting the IEEE 754-2008 decimal floating point standard (32, 64, and 128 bit) (using the bid format).
It’s as fast as BigFloat with 128-bit precision (at least, the last time I benchmarked the two of them), for at least the common operations I tried, and it doesn’t allocate anything on the heap.
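As a point of comparison, IEEE 754-2008 decimal128 carries 34 significant decimal digits; Python's `decimal` module can mirror that precision with an explicit context (a sketch of the format's semantics, not DecFP itself):

```python
from decimal import Decimal, Context

ctx = Context(prec=34)   # decimal128 has 34 significant digits
q = ctx.divide(Decimal(6), Decimal("0.2"))
print(q)   # 30
```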