Hi,
I am having a strange problem with the tests of one of my packages, MessyTimeSeriesOptim.jl, which is currently under registration. I have cloned the dev branch on my laptop and added the package via `Pkg.add(path="...")`. All tests pass if I run them via `include("./test/runtests.jl")`. However, if I use the more usual `]test MessyTimeSeriesOptim`, they fail at the testset "DFM simulations: MLE".

Why is `]test` behaving differently from `include("./test/runtests.jl")`?
I once solved a problem like this, and it was caused by bounds checking. When you run the code with `include`, it inherits the bounds-checking setting from the Julia session; if you test with `]test`, bounds are always checked, even where there is an `@inbounds`, i.e. it is overridden.
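For reference, you can check from inside the test suite whether bounds checking has been forced on. This sketch relies on the internal `Base.JLOptions()` struct, which is not a public API, so treat the field name and values as an assumption to verify for your Julia version:

```julia
# 0 = auto (respect @inbounds), 1 = forced on (--check-bounds=yes), 2 = forced off
check_bounds_mode = Base.JLOptions().check_bounds
println("check_bounds = ", check_bounds_mode)
```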
I see, thanks. However, I am not getting an error; the run simply returns different floats at the end. Is there anything else that gets overridden by `]test`?
`]test` runs with bounds checking forced on (`--check-bounds=yes`).
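A sketch of how to compare the two setups from the REPL. The `julia_args` keyword to `Pkg.test` and the `--check-bounds=auto` flag are only available on reasonably recent Pkg/Julia versions, so check that they exist in your setup before relying on this:

```julia
import Pkg

# Equivalent of `]test MessyTimeSeriesOptim`: runs the suite in a separate
# process with bounds checking forced on.
Pkg.test("MessyTimeSeriesOptim")

# Run the same suite but let @inbounds take effect again, mimicking
# `include("./test/runtests.jl")` in a default session.
Pkg.test("MessyTimeSeriesOptim"; julia_args=`--check-bounds=auto`)
```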
@kristoffer.carlsson:
If I include `runtests.jl` with `--check-bounds=yes`, I get the same problem. However, if I remove all `@inbounds` and `@turbo` in my code and include `runtests.jl` without `--check-bounds=yes`, all tests pass. Is `--check-bounds=yes` doing anything else? Can it affect one of my dependencies rather than my package directly? Also, is it possible to highlight which lines behave differently with `--check-bounds=yes`?
The ones with `@inbounds`. Usually the numeric difference comes from the code being able to use SIMD or not, depending on the bounds-checking behaviour.
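A minimal sketch of where such eps-level differences typically come from (toy functions for illustration, not code from the package): a reduction written with `@inbounds @simd` may be vectorized and accumulate in a different order than the plain loop, and forcing bounds checks on can prevent that vectorization.

```julia
# Plain sequential accumulation: one fixed summation order.
function sum_scalar(x)
    s = 0.0
    for i in eachindex(x)
        s += x[i]
    end
    return s
end

# With @inbounds @simd the compiler may vectorize and reassociate the sum,
# so the floating-point result can differ in the last few bits.
function sum_vectorized(x)
    s = 0.0
    @inbounds @simd for i in eachindex(x)
        s += x[i]
    end
    return s
end

x = rand(10^6)
sum_scalar(x) == sum_vectorized(x)      # often false
abs(sum_scalar(x) - sum_vectorized(x))  # tiny, but enough to break a tight tolerance
```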
Right, but I removed all `@turbo` and `@inbounds` now, and I am not using `@simd` directly. Can it be caused by some dependency?
It’s actually places that use `@simd` that are interesting. Could be in Base.
Thanks. I have double-checked everything, and the differences with/without bounds checking are minimal. However, I was using a very small tolerance for the tests that were failing. I suppose these small numerical differences are coming from somewhere in Base.
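For what it's worth, one way to make the failing comparisons robust to this kind of reordering noise is to loosen the tolerance explicitly. The names below are placeholders, not the actual values compared in the package's testset:

```julia
using Test

# `estimate` and `reference` stand in for the values compared in the failing
# testset. An explicit rtol on `≈` absorbs eps-level differences caused by
# SIMD vs. scalar accumulation order.
@test estimate ≈ reference rtol=1e-6
```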