# About Testing Float32 values for equality

I’m searching for a floating point value in an array.
I already know that index 231 has the Float32 value of 0.011764707.

When I test for equality in the code below it fails the test.
Why are these two values, `Float32(value)` and `0.011764707`, not equal?
Am I testing it incorrectly?

Most likely because the value in your array is a `Float32`, not a `Float64`. Since this number is most likely not exactly representable in binary floating point (disclaimer: I didn't check that), its closest approximation, i.e. the parsing result, will be different for the two types even though they print the same. If you use a `Float32` literal (append `f0` at the end) the condition should be true, assuming what you said is indeed the output of your program.

P.S. When you paste code, you should either make it runnable or at least show the result you are confused about. The code you pasted right now is no more than some random code that may or may not support your question.

`0.011764707` is a `Float64` literal: it will give you the `Float64` closest to 0.011764707, which is exactly

```julia
julia> using Printf

julia> @printf "%.60f" 0.011764707
0.011764706999999999248451842959184432402253150939941406250000
```

You want `0.011764707f0` which is a `Float32` literal, which has a slightly different value:

```julia
julia> @printf "%.60f" 0.011764707f0
0.011764707043766975402832031250000000000000000000000000000000
```
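Putting the two together, a minimal sketch (using the value from this thread) of why the comparison fails with a `Float64` literal but succeeds with the `f0` suffix:

```julia
x = Float32(0.011764707)  # a Float32, as stored in the array

# Comparing against a Float64 literal promotes x to Float64;
# the two closest approximations differ, so this is false:
println(x == 0.011764707)    # false

# With a Float32 literal, both sides are the exact same Float32:
println(x == 0.011764707f0)  # true
```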

I did what both @yuyichao and @simonbyrne suggested and it still didn’t work properly. Here is a more complete, runnable version of the code:

```julia
using Flux, Flux.Data.MNIST, Statistics
using Flux: onehotbatch, onecold, crossentropy, throttle
using Base.Iterators: repeated, partition
using Printf, BSON

train_labels = MNIST.labels() # 60,000
train_imgs = MNIST.images()   # 60,000

function make_minibatch(X, Y, idxs)
    # Creates X_batch and the associated Y_batch of labels
    X_batch = Array{Float32}(undef, size(X[1])..., 1, length(idxs))
    for i in 1:length(idxs) # loops from 1 to 128 (or whatever the length of the partition is)
        X_batch[:, :, :, i] = Float32.(X[idxs[i]]) # places X[idxs[i]] in X_batch[28, 28, 1, i]
    end
    Y_batch = onehotbatch(Y[idxs], 0:9) # provides an efficient way of accessing labels
    return (X_batch, Y_batch)
end

batch_size = 128
mb_idxs = partition(1:length(train_imgs), batch_size) # divides 60,000 into batches of 128

train_set = [make_minibatch(train_imgs, train_labels, i) for i in mb_idxs]
a = train_set[1, 1]

value = (a[1])[231]
searchvalue = 0.011764707f0
println("typeof(value) = ", typeof(value))
println("typeof(searchvalue) = ", typeof(searchvalue))
println("for ", 231, " in a[1] = ", value)

if value == searchvalue
    println("FOUND IT!!!")
else
    println("Failed to locate it")
end
```

Here is the output:

```
typeof(value) = Float32
typeof(searchvalue) = Float32
for 231 in a[1] = 0.11764707
Failed to locate it
```
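For completeness, here is a minimal sketch (with made-up example data, not the MNIST code above) of searching a `Float32` array for an exact match using `findfirst` and a `Float32` literal:

```julia
data = Float32[0.5, 0.011764707, 0.25]  # hypothetical example data

# Use a Float32 literal so both sides of == are the same type:
idx = findfirst(==(0.011764707f0), data)
println(idx)  # 2

# A Float64 literal finds nothing, because the promoted comparison fails:
println(findfirst(==(0.011764707), data))  # nothing
```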

You did realize that the two numbers aren't the same, didn't you?


@yuyichao
Yes, but I understood that the `0.11764707f0` `Float32` literal would be the same as the `0.11764707` `Float32` value, because the straight-up `0.11764707 != 0.11764707`… hence the problem…

No. Assuming you didn't have a typo in your code, did you realize that one of them was `0.0117...` while the other one is `0.117...`?

@yuyichao
OMG!!!

I am truly a complete idiot and hope you accept my apology for wasting your time!

I will make a strong effort not to do this again.

Thank you sir, for your help.

The issue about Float32 vs Float64 still applies, so most of the time/effort was not wasted…
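More generally, exact `==` on floating-point values is fragile once any arithmetic or precision conversion is involved. Julia's built-in `isapprox` (the `≈` operator) is often a safer comparison across precisions; a minimal sketch using the value from this thread:

```julia
a = Float32(0.011764707)

# Exact comparison across precisions fails:
println(a == 0.011764707)          # false

# isapprox tolerates the tiny Float32/Float64 representation gap:
println(a ≈ 0.011764707)           # true
println(isapprox(a, 0.011764707))  # true
```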

And just to note that this is why I asked for either the full code or the output: to confirm your claim…

@yuyichao
Thank you for the “full runnable code” idea; it made all the difference.

Thanks again.
