I was building v1.3.0 from source, and out of habit ran `make testall` to make sure the tests pass with my build before I started using it. I had 52 tests fail from a variety of stdlib packages, so naturally I tried building again without my `Make.user` tweaks - and got the same 52 failed tests. Then I downloaded the binary distribution and ran the same tests (located at `share/julia/test/runtests.jl`) - and still, the same 52 tests were failing.
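For reference, this is roughly how the bundled suite can be invoked from the unpacked binary distribution (paths are illustrative and depend on where you extracted it; test names are positional arguments):

```
# Run from the root of the unpacked binary distribution:
./bin/julia ./share/julia/test/runtests.jl                # whole suite
./bin/julia ./share/julia/test/runtests.jl LinearAlgebra  # one stdlib's tests
```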
This seems odd, so I took a look at one of the tests that failed, from the LinearAlgebra package, in the file `share/julia/stdlib/v1.3/LinearAlgebra/test/tridiag.jl`:
```julia
@testset "issue #29644" begin
    F = lu(Tridiagonal(sparse(1.0I, 3, 3)))
    @test F.L == Matrix(I, 3, 3)
    @test startswith(sprint(show, MIME("text/plain"), F),
```
The offending test was the second one: the string I get from `sprint(show, MIME("text/plain"), F)` doesn't start with the expected prefix - the two disagree on the text representation of the types (the printing of the `SparseArray` types).
Even more strangely, when running the julia binary on just this test file, `share/julia/stdlib/v1.3/LinearAlgebra/test/tridiag.jl`, the test passes.
I’m not super concerned about the result of this particular test, as the code seems to be working. Just wondering whether `make testall` gives a number of test failures for everybody, or whether I’m doing something basic wrong.
If you want to check whether your experience is typical, try opening some of the pull requests to Julia itself and examining the “checks.” In general you’ll see that the large majority of runs pass (though some platforms seem flakier than others). I’ve not tested 1.3 in a while, but in general `master` passes (otherwise it would be hard to do development).
This may be a subtle bug in the `testall` target: it may somehow differ from the way tests are run on CI (which I think has generally been done by calling julia directly from the CI script). You can try comparing the working directories and arguments given to the `choosetests.jl` test runner.
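One way to eyeball the difference (paths illustrative, and assuming you're in a source checkout): a dry run of make shows what the target would actually execute, which you can compare against a direct invocation of the runner.

```
# See what `make testall` would run, without running it:
make -n testall | grep runtests

# versus invoking the runner directly, roughly as CI does:
./usr/bin/julia ./test/runtests.jl all
```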
In the past I’ve hit weird edge cases where the test runner behaves differently depending on the number of tests invoked - IIRC due to the wrapping of tests within a test module and the details of how tests are distributed to parallel workers.
Julia prints the module prefix or not depending on whether the symbol is available in `Main` (a well-intended feature, but arguably kind of strange). It likely has something to do with that.
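A minimal sketch of that behavior (the module and type names here are made up for illustration):

```julia
# Whether a type prints with its module prefix depends on whether
# the name is reachable from Main.
module Demo
export Point
struct Point end    # exported, so visible from Main after `using`
struct Hidden end   # not exported
end

using .Demo

sprint(show, Point)        # typically prints without a prefix: "Point"
sprint(show, Demo.Hidden)  # typically prints qualified: "Main.Demo.Hidden"
```

So the same test string comparison can pass or fail depending on which names the test harness has pulled into `Main` before the test runs.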
I just ran the 1.3.0 tests on Linux and I get failures in Dates/io, of the sort:
```
Test Failed at ~/julia/julia-1.3/usr/share/julia/stdlib/v1.3/Dates/test/io.jl:528
  Expression: uppercase(t12) == Dates.format(t, "II:MMp") == Dates.format(d, "II:MMp") == Libc.strftime("%I:%M%p", tmstruct)
   Evaluated: "12:00AM" == "12:00AM" == "12:00AM" == "12:00"
```
I suspect that has something to do with the locale settings on my machine. Other than that, all good - no issues in LinAlg.
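The locale theory is easy to check: `strftime`’s `%p` conversion is locale-dependent, and in locales that have no AM/PM designator it expands to an empty string, which would give exactly `"12:00"`. A quick sketch (the `TmStruct` field values are illustrative):

```julia
# %p yields "AM"/"PM" in C/English locales, but an empty string in
# many others (e.g. de_DE), which would explain the failure above.
tm = Libc.TmStruct(0, 0, 0, 1, 0, 100, 6, 0, 0)  # midnight, arbitrary date
Libc.strftime("%I:%M%p", tm)  # "12:00AM" under an English locale
```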
(But I did notice that the test suite used at times most of my 16GB of RAM. Seems a bit excessive.)