Inconsistent Test results between VScode TestItems.jl and Test.jl

Hi all,

When testing a package I develop, I noticed an inconsistency in the result of a unit test for one of my functions between the TestItems.jl framework used within the VSCode UI and Test.jl.

Here is a code sample for reproducibility purposes:

using TestItems
using TestItemRunner

@run_package_tests # so the test items are picked up by ] test

@testitem "Test" begin
    Δt = 1e-6
    tf = 7e-2
    t = 0.:Δt:tf

    Ft = zeros(length(t))
    tb = 8e-3
    duration = 5e-2

    tm = (2tb + duration)/2.
    pos_beg = argmin((t .- tb).^2.)
    pos_m = argmin((t .- tm).^2.)
    pos_end = argmin((t .- tb .- duration).^2.)
    amp = 2/duration

    Ft[pos_beg:pos_m] = amp*(t[pos_beg:pos_m] .- tb)
    Ft[pos_m+1:pos_end] = 1. .- amp*(t[pos_m+1:pos_end] .- tm)

    @test sum(diff(Ft)) == 0.
end

When using TestItems.jl within the VSCode UI, the test passes (I have also checked this assertion in the REPL). However, when using ] test, the test fails and the following message is displayed:

Test: Test Failed at \test\runtests.jl
  Expression: sum(diff(Ft)) == 0.0
   Evaluated: 2.220446049250313e-16 == 0.0

On my machine, 2.220446049250313e-16 is the machine epsilon. Of course, because of floating-point arithmetic, small differences can occur between what we expect and what is computed. However, in the present case that does not seem to be the whole story, since when I execute sum(diff(Ft)) in the REPL the result is 0. as expected.
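For reference, this value matches eps() for Float64 exactly:

```julia
# Machine epsilon for Float64 is 2^-52 = 2.220446049250313e-16
@show eps(Float64)            # eps(Float64) = 2.220446049250313e-16
@show eps(Float64) == 2.0^-52 # true
```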

Is it a bug, or is it the expected behaviour?

Thanks for your comments

I cannot say much regarding TestItems.jl, but I always use the isapprox function when comparing floats.
You can also use the binary operator ≈ (typed \approx<TAB>), which is equivalent to isapprox with default arguments. Note that with the default tolerances a comparison against exactly 0.0 is false, so an explicit atol is needed:

julia> 2.220446049250313e-16 ≈ 0.0
false

julia> isapprox(2.220446049250313e-16, 0.0; atol=eps())
true

Nevertheless, for your example, I can reproduce the behavior.
Maybe it has more to do with the computation itself, as the same error occurs when using the standard approach via @testset.
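For completeness, @test from Test.jl also accepts trailing tolerance keywords when used with ≈, so the assertion in the example could be made robust against this kind of rounding. A sketch, reusing the value from the failure message above:

```julia
using Test

x = 2.220446049250313e-16  # the value observed in the failing ] test run

# ≈ with an explicit absolute tolerance; the trailing atol=... keyword
# is forwarded to isapprox by the @test macro
@test x ≈ 0.0 atol=eps()

# equivalent explicit call
@test isapprox(x, 0.0; atol=eps())
```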

Thanks @kfrb for your answer.

As you said, the same error occurs when using the standard approach via @testset.
When the code is executed in the REPL, the test passes, while when it is executed using ] test, it fails.

I agree with you regarding isapprox. However, it raises a reproducibility problem: I don’t know how to trace the computations performed when using ] test. I will try to figure out what happens here.

] test runs with bounds checking on, which can inhibit some optimizations like SIMD (which can change the result slightly). And indeed, running the code in a Julia session started with --check-bounds=yes gives

julia> sum(diff(Ft))
2.220446049250313e-16
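To see why the presence or absence of SIMD can matter: floating-point addition is not associative, so reordering a reduction can change which small terms survive rounding. A minimal illustration (not the exact transformation the compiler applies here):

```julia
# Floating-point addition is not associative: two mathematically
# equivalent groupings of the same sum can give different results.
a, b, c = 1e16, -1e16, 0.5

left  = (a + b) + c  # cancellation happens first, then 0.5 is added -> 0.5
right = a + (b + c)  # b + c rounds back to -1e16 (0.5 < half the ulp of 1e16) -> 0.0

@show left right
```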

Thank you very much @kristoffer.carlsson for this explanation that clarifies everything.