CircleFit tests fail for v1.9.0-beta2 on GitHub, but pass locally

I recently tested my circle-fitting package locally with v1.9.0-beta2 and the tests passed. But after adding that version to the GitHub workflow, the tests for v1.9.0-beta2 fail on CI.

Does anyone have an idea how to fix this?

Here is the log on GitHub:

Maybe someone can try to run the tests locally, so I can check if my setup is somehow broken.

using Pkg
Pkg.activate(;temp=true)
Pkg.add("CircleFit")
Pkg.test("CircleFit")

Looks like the problem might arise here:

circfit(x,y) = kasa(x,y)

@deprecate circfit(x,y) StatsBase.fit(Circle,x,y) false

Which of these methods do you want to use? The latter is overwriting the former. Looking at the docstrings for @deprecate on v1.8 and v1.9, there are some type signature-related changes, but I’m not sure if those apply here.
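
A minimal sketch of the fix I have in mind (assuming kasa and Circle are defined in CircleFit, and that the intent is to keep only the deprecated entry point) - @deprecate itself defines circfit, so the standalone definition can simply be dropped:

import StatsBase   # already referenced by the package, shown here for completeness

# circfit(x, y) = kasa(x, y)   # removed: the @deprecate below already defines circfit(x, y)
@deprecate circfit(x, y) StatsBase.fit(Circle, x, y) false   # deprecated wrapper forwarding to StatsBase.fit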

I think I fixed the warning caused by @deprecate, but the test still fails. Any other ideas?

Very strange - it passes locally for me on v1.9.0-beta2, too. :taubin is the only fitting algorithm that’s failing - maybe add some @show statements for debugging and do another CI run?

What did you execute to get :taubin to fail?

Sorry, I meant that the CI test failures are only for test/Circle.jl lines 36-38, which correspond to the coefficient tests for :taubin.

M has a really high condition number, which might be causing eigen to blow up with whatever eigensolver the CI machine is using:

julia> cond(M)
852530.2099417784

It seems LinearAlgebra.eigen produces different outputs.

I have added some @show calls to the ci_taubin branch, and CI shows different results for:
F = eigen(M,C)

For 1.8 I get:

M = [163204.0 8080.0 8080.0 804.0; 8080.0 402.0 400.0 40.0; 8080.0 400.0 402.0 40.0; 804.0 40.0 40.0 4.0]
C = [3216.0 80.0 80.0 0.0; 80.0 4.0 0.0 0.0; 80.0 0.0 4.0 0.0; 0.0 0.0 0.0 0.0]
F = LinearAlgebra.GeneralizedEigen{Float64, Float64, Matrix{Float64}, Vector{Float64}}([6.530180190531218e-13, 0.4999999999999731, 0.5000000000000021, Inf], [-0.005025125628139809 -6.825790392877369e-16 -2.3657900227256802e-15 1.6529425944801856e-18; 0.10050251256280508 -0.04999999999999314 -1.0 7.610423195419188e-18; 0.10050251256280501 -0.04999999999999311 0.9957942739718294 7.507114283264175e-18; -1.0 1.0 0.042057260282185774 1.0])

For 1.9.0-beta2 I get:

M = [163204.0 8080.0 8080.0 804.0; 8080.0 402.0 400.0 40.0; 8080.0 400.0 402.0 40.0; 804.0 40.0 40.0 4.0]
C = [3216.0 80.0 80.0 0.0; 80.0 4.0 0.0 0.0; 80.0 0.0 4.0 0.0; 0.0 0.0 0.0 0.0]
F = LinearAlgebra.GeneralizedEigen{Float64, Float64, Matrix{Float64}, Vector{Float64}}([-Inf, -3.1907188949449455e-12, 0.4999999999999454, 0.5000000000000019], [5.509808648267286e-19 0.005025125628139395 -1.9866962916746387e-14 -3.3896807769822033e-15; -3.2611179936932e-17 -0.10050251256280036 -0.04999999999980008 -0.9990369418434482; -3.509059382865228e-17 -0.10050251256280139 -0.04999999999980058 1.0; -1.0 1.0 1.0 -0.009630581564836144])
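
For anyone who wants to poke at this outside the package, here is a small standalone snippet built from the matrices in the @show output above (nothing CircleFit-specific, only LinearAlgebra):

using LinearAlgebra

M = [163204.0 8080.0 8080.0 804.0;
     8080.0 402.0 400.0 40.0;
     8080.0 400.0 402.0 40.0;
     804.0 40.0 40.0 4.0]

C = [3216.0 80.0 80.0 0.0;
     80.0 4.0 0.0 0.0;
     80.0 0.0 4.0 0.0;
     0.0 0.0 0.0 0.0]

F = eigen(M, C)   # generalized eigendecomposition; differs between 1.8 and 1.9.0-beta2
@show F.values
@show cond(M)     # ≈ 8.5e5
@show cond(C)     # Inf, since the last row and column of C are zero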

What is strange is that the LinearAlgebra version ([37e2e46d]) is the same in both cases. So something in the BLAS ([4536629a] OpenBLAS_jll v0.3.21+0 vs. [4536629a] OpenBLAS_jll v0.3.20+0) seems to behave differently.

However, I also have [4536629a] OpenBLAS_jll v0.3.21 installed locally, and I don’t see any difference.
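
One thing worth double-checking: the OpenBLAS_jll entry in the manifest is not necessarily the BLAS that LinearAlgebra actually calls into (as far as I know, Julia loads its bundled OpenBLAS through libblastrampoline). To see what is really loaded on CI vs. locally, you could print:

using LinearAlgebra
BLAS.get_config()   # lists the BLAS/LAPACK libraries currently loaded via libblastrampoline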

Here is the failing CI log: github action

julia> cond(M)
852530.2099417784

julia> cond(C)
Inf

You can’t rely on numerical linear algebra to produce correct results when the condition number is infinite (cond(C) is Inf here because the last row and column of C are all zeros, so C is singular). Taubin’s paper also suggests using Newton’s method to find the solution - have you tried that? I’m not sure what’s changing between the versions, though - the finer details of BLAS versioning are beyond me. Paging @vchuravy, who may know more.
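
In case it helps, here is a rough sketch of that Newton idea with my own names (and a Newton step derived via Jacobi’s formula, which may differ from how the paper writes it): instead of asking eigen for the whole generalized spectrum, iterate towards the smallest non-negative root of det(M - η*C) = 0.

using LinearAlgebra

# Newton iteration for the smallest non-negative root of p(η) = det(M - η*C),
# starting at η = 0. Jacobi's formula gives p'(η) = -p(η) * tr((M - η*C) \ C),
# so the Newton update simplifies to η += 1 / tr((M - η*C) \ C).
function smallest_root(M, C; rtol = 1e-12, maxiter = 50)
    η = 0.0
    for _ in 1:maxiter
        Δ = 1 / tr((M - η * C) \ C)
        η += Δ
        abs(Δ) <= rtol * max(abs(η), 1.0) && break
    end
    return η
end

# The associated parameter vector is then the right singular vector of M - η*C
# belonging to its smallest singular value:
# v = svd(M - η * C).V[:, end]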

You are correct. My plan was to switch to Newton’s method eventually, which is also why I consider the current method unoptimized. However, I did not expect the eigenvalue calculation to suddenly start failing.