Hello all, I am learning Julia through the MIT computational thinking course, and am curious whether it’s possible to floor within a normed or fixed-point type. Normally, I expect a number, once floored, to stay that same number even if floored again. However, because the flooring function seems to be limited in the types it can return, code like this:
```julia
floor(N0f8(0.33), digits=1) |> N0f8
```

returns `0.298N0f8`, which, if passed to the exact same function:

```julia
floor(0.298N0f8, digits=1) |> N0f8
```

returns `0.2N0f8`. This leads to an inconsistency in the behavior of a generic quantization function I wrote: floating-point types are idempotent under requantization, but fixed-point types, because of the above behavior, have some values that keep changing each time they are requantized.
I’m only doing these things for learning purposes, so if this problem is truly of no practical value, I could understand there not being a good way to do this. But I am curious whether it’s possible to use `floor` in such a way that code like `floor(N0f8(0.33), digits=1) |> N0f8` would stay at the nearest fixed-point value above 0.3 rather than going below it. Thank you.
Here’s the code I wrote that led me to notice this issue when quantizing images of different RGB types:

```julia
r = quantize(ColorTypes.red(color))
g = quantize(ColorTypes.green(color))
b = quantize(ColorTypes.blue(color))
typeof(color)(r, g, b)
```
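For context, `quantize` here was essentially a thin wrapper around `floor`. The drift can be reproduced without any packages; this is a hypothetical sketch (the helper names `quantize` and `to_n0f8` are mine, and `N0f8` is simulated as `i//255` rationals instead of depending on FixedPointNumbers.jl):

```julia
# hypothetical sketch: simulate N0f8 (values i/255) with plain rationals
to_n0f8(x) = round(Int, x * 255) // 255   # nearest representable value

quantize(x) = floor(x, digits=1)          # stand-in for the quantizer above

# floats are idempotent under requantization:
quantize(quantize(0.33)) == quantize(0.33)   # true

# but the fixed-point round-trip keeps drifting:
x0 = to_n0f8(0.33)                  # == 84//255 ≈ 0.329
x1 = to_n0f8(quantize(float(x0)))   # == 76//255 ≈ 0.298
x2 = to_n0f8(quantize(float(x1)))   # == 51//255, i.e. exactly 1/5
x3 = to_n0f8(quantize(float(x2)))   # stays at 51//255: fixed point reached
```

Each round trip through `floor(…, digits=1)` lands on a float that is then re-rounded onto the 1/255 grid, which is what produces the 0.329 → 0.298 → 0.2 cascade.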
The problem is that the biggest `N0f8` not larger than 0.33 is 0.329, whereas the biggest one not larger than 0.3 is 0.298. If you floor with only one decimal digit of precision, the algorithm rightly chooses ever smaller numbers until it reaches a number with one digit that is exactly representable as x/255. (Remember, `N0f8` is the type representing all fractions of that form!) In this case, that’s 0.2 (= 51/255).
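You can verify those values with plain integer arithmetic, no FixedPointNumbers.jl needed (assuming the i/255 representation described above):

```julia
# biggest numerator i with i/255 <= x:
floor(Int, 0.33 * 255)   # 84 → 84/255 ≈ 0.329, the biggest N0f8 ≤ 0.33
floor(Int, 0.30 * 255)   # 76 → 76/255 ≈ 0.298, the biggest N0f8 ≤ 0.3

# 0.2 is the first one-digit decimal that sits exactly on the grid:
51 / 255 == 0.2          # true
```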
Note the documentation of `floor`:

> `floor(x)` returns the nearest integral value of the same type as `x` that is less than or equal to `x`.
Oh huh, wow, that is very interesting and makes a lot of sense. I think the take-away for me here is that I should choose a smarter quantization algorithm if idempotence matters to me! Or at least write a different method for normed numbers in this case.
The only remaining question I have is why `floor(N0f8(0.33), digits=1)` returns `0.3f0`, which is a Float32 and not “the nearest integral value of the same type as x”?
Edit: Oh, the answer is that it only does this when I’m not using the `digits` parameter. I tried `floor(N0f8(0.33))` and just got `0.0N0f8`, which is as expected (though not particularly useful).
Seems to be because passing the `digits` keyword falls back to `round(x::Real, ...)`, which itself calls `float(x)` further down the chain. The result, of course, is a floating-point value. Those kinds of problems are probably also going to happen with regular floating-point numbers: at some point, the precision is just higher than the number of digits you have, and you’re going to slide further down for longer before stabilizing.
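The same fallback shows up for other exotic `Real` subtypes, so it isn’t specific to FixedPointNumbers.jl; `Rational` loses its type the moment `digits` enters the picture (a small probe of the generic code path):

```julia
# without digits, floor stays within the type:
floor(1//3)                    # 0//1, still a Rational

# with digits, the generic round(::Real; digits) path calls float(x),
# so the result comes back as a Float64:
floor(1//3, digits=1)          # 0.3
typeof(floor(1//3, digits=1))  # Float64
```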
By the way, you can investigate things like this yourself, e.g. by using Debugger.jl: run `@enter floor(N0f8(0.33), digits=1)` and step through the execution manually.
In all fairness though, since FixedPointNumbers.jl breaks the contract of `floor` here, it should probably implement those methods specially to avoid returning a different type than the input type.
Question about this: `big(.6)` shows that `.6` is actually stored as a slightly smaller value, yet `floor(.6, digits=1) == .6`, while this thread suggests that it should result in `.5`. What am I missing?
`0.6` isn’t actually stored as 0.6, and definitely not as the smaller 0.59..., but as 0.600000023842 (for Float32). The flooring algorithm, if given `digits`, internally multiplies by `10^digits` (in the easy case), rounds the result down to the next representable number, and divides by `10^digits` again. Because `N0f8(0.3) == 0.298`, that algorithm already begins at 0.2..! In the case of `floor(0.600000023842, digits=1)`, this results in `round(6.00000023842, RoundDown) == 6.0`, which is exactly representable, and the division brings it back to the same value again.
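That multiply–round–divide dance can be sketched in a few lines. This is a simplification of the “easy case” only (the name `floor_digits` is mine; Base’s actual implementation handles bases, overflow, and edge cases more carefully):

```julia
# simplified sketch of floor(x; digits=d) for the easy case, base 10:
# scale up, floor, scale back down, all in the float type of x
floor_digits(x, d) = floor(x * oftype(x, 10)^d) / oftype(x, 10)^d

floor_digits(0.6f0, 1)         # 0.6f0 — 0.6f0 * 10 rounds to exactly 6.0f0
floor_digits(0.32941177f0, 1)  # 0.3f0 — 84/255 slides down to 0.3
```

The interesting step is the multiplication: `0.6f0 * 10f0` lands exactly on `6.0f0` after rounding, so flooring doesn’t lose anything and the division restores the original value.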
Floating-point arithmetic is carefully designed to take care of these edge cases, while FixedPointNumbers.jl seems to be a little more lenient here.
I was looking at the Float64 case, where `.6f0` is smaller than `6//10`.
It’s (almost) the same for Float64, I just couldn’t find anything that gave me a more exact decimal representation for Float64. You can play around with the numbers here.
`0.6f0` gives you a Float32, and that’s not smaller than `6//10`:

```julia
julia> 0.6f0 < 6//10
false
```
Like every language, Julia lies a little when printing floating-point numbers to the screen; it’s just one of the many limitations of using them, and there’s no way around that.
I agree though, it’s definitely a little weird that _not really 0.6_ `* 10 == 6.0` and that _actually 6.0_ `/ 10 ==` _not really 0.6_, but at least it’s internally consistent.
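A quick check of that internal consistency, done in Float32 where `0.6f0` is really 0.60000002384…:

```julia
x = 0.6f0             # actually 0.60000002384185791015625
x * 10f0 == 6.0f0     # true — the slightly-too-big product rounds to exactly 6
6.0f0 / 10f0 == x     # true — dividing returns the same "not really 0.6"
```

So both directions round back onto each other, which is exactly why `floor(0.6f0, digits=1)` appears to do the right thing.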
`Normed` is scaled by `2^f - 1` instead of `2^f`, which can cause rounding problems when it is calculated via floating-point numbers. Supporting only `base=10` isn’t too hard, but the general support is a bit of a pain.
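To see why the `2^f - 1` scaling bites, exact rational arithmetic shows which decimals land on the grid (plain stdlib, no package needed):

```julia
# under Normed's i/(2^f - 1) scaling with f = 8, 0.3 falls between two
# representable values: 0.3 * 255 is not an integer
3//10 * (2^8 - 1)   # 153//2, i.e. 76.5

# while e.g. 0.2 sits exactly on the grid:
2//10 * (2^8 - 1)   # 51
```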
IEEE 754 defines “correct rounding” as converting an infinitely precise result to a floating-point number, so whether a result is correctly rounded depends on the equation used.