Inv not working on Mac?

Hello,

So I have just installed julia 1.10.2 on a mac (M1) and I try running

A = [1 2 3;4 5 6;7 8 9]

inv(A)

and I expect a singularity error, since A is singular. However, I actually get a matrix as output. What is weirder is that if I do the same on a Windows machine or a Linux machine running Arch, I do indeed get an error. Is this something to do with the Mac, or is there a problem with how I am running my code? For reference, I also installed Julia 1.6.4 on the Mac, and the same code does in fact throw an error.

Is there any intuition behind this?

Determining whether a matrix is singular is an inherently unstable computation. Different orders of operations or different cut-offs on the condition number will give different results.

julia> using AppleAccelerate

julia> A = [1 2 3; 4 5 6; 7 8 9]
3×3 Matrix{Int64}:
 1  2  3
 4  5  6
 7  8  9

julia> using LinearAlgebra

julia> cond(A)   # condition number
5.015946803014265e16
julia> inv(A)    # an unstable approximation to the inverse
3×3 Matrix{Float64}:
  3.15252e15  -6.30504e15   3.15252e15
 -6.30504e15   1.26101e16  -6.30504e15
  3.15252e15  -6.30504e15   3.15252e15

julia> versioninfo()
Julia Version 1.10.2
Commit bd47eca2c8a (2024-03-01 10:14 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: macOS (arm64-apple-darwin22.4.0)
  CPU: 8 × Apple M1 Pro
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-15.0.7 (ORCJIT, apple-m1)
Threads: 6 default, 0 interactive, 3 GC (on 6 virtual cores)
Environment:
  JULIA_EDITOR = code

This matrix is singular in theory, but not computationally singular within the tolerances used here.
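If you want a machine-independent way to detect this kind of near-singularity, one option (a sketch using only the stdlib LinearAlgebra) is to look at the singular values or the tolerance-based rank rather than waiting for inv to throw:

```julia
using LinearAlgebra

A = [1 2 3; 4 5 6; 7 8 9]

# The singular values expose the near-singularity directly: the smallest
# one is at roundoff level relative to the largest.
println(svdvals(A))

# rank() applies a tolerance-based cutoff on the singular values, so it
# reports 2 regardless of which BLAS the machine is using.
println(rank(A))   # 2
```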


I also get a matrix on my phone.

julia> A = [1 2 3;4 5 6;7 8 9]
3×3 Matrix{Int64}:
 1  2  3
 4  5  6
 7  8  9

julia> inv(A)
3×3 Matrix{Float64}:
  3.15252e15  -6.30504e15   3.15252e15
 -6.30504e15   1.26101e16  -6.30504e15
  3.15252e15  -6.30504e15   3.15252e15

julia> versioninfo()
Julia Version 1.10.0
Commit 3120989f39b (2023-12-25 18:01 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: Linux (aarch64-linux-gnu)
  CPU: 6 × Cortex-A55
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-15.0.7 (ORCJIT, cortex-a55)
  Threads: 1 on 8 virtual cores

I checked eps() on all devices and they yield the same value. Wouldn't I expect a different eps() across devices if the tolerance were different? Or how could I change the default tolerance for the inv function?

I believe that eps() is just based on the floating-point representation; essentially all architectures use the IEEE 754 floating-point standard and would return the same value. The differences can show up in BLAS implementations, because floating-point arithmetic is not associative and the order of operations can affect the result.
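A tiny illustration of that non-associativity, which is what lets two BLAS builds disagree on the same input:

```julia
# Floating-point addition is not associative: regrouping the same three
# numbers changes the rounding and hence the result.
a, b, c = 0.1, 0.2, 0.3
println((a + b) + c == a + (b + c))   # false
println((a + b) + c)                  # 0.6000000000000001
println(a + (b + c))                  # 0.6
```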

It is not so much the inv operation as the LU factorization that determines whether the matrix is deemed singular:

julia> lu(A)
LU{Float64, Matrix{Float64}, Vector{Int64}}
L factor:
3×3 Matrix{Float64}:
 1.0       0.0  0.0
 0.142857  1.0  0.0
 0.571429  0.5  1.0
U factor:
3×3 Matrix{Float64}:
 7.0  8.0        9.0
 0.0  0.857143   1.71429
 0.0  0.0       -1.58603e-16

You can see that the U matrix is essentially singular. I’m not sure where the check on singularity of the LU factorization is performed.
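One way to perform that check yourself, rather than relying on wherever LAPACK does it, is to factor with check = false and compare the smallest pivot against a roundoff-level threshold (the tolerance below is illustrative, not the one LAPACK uses):

```julia
using LinearAlgebra

A = [1 2 3; 4 5 6; 7 8 9]
F = lu(A; check = false)   # never throws, even on an exactly zero pivot

# Compare the smallest pivot magnitude to a threshold scaled by the matrix
# norm; this flags the matrix as numerically singular on any machine,
# whether the last pivot comes out as 0.0 or as ~1e-16.
tol = size(A, 1) * eps() * opnorm(A, Inf)
println(minimum(abs.(diag(F.U))) < tol)   # true
```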

On an x86 linux system, I see

julia> inv([1 2 3;4 5 6;7 8 9])
ERROR: LinearAlgebra.SingularException(3)
julia> inv(Number[1 2 3;4 5 6;7 8 9])
3×3 Matrix{Number}:
 -4.5036e15    9.0072e15   -4.5036e15
  9.0072e15   -1.80144e16   9.0072e15
 -4.5036e15    9.0072e15   -4.5036e15

The TL;DR is that whether inv succeeds or fails depends on the implementation of the LU factorization, which in turn depends on the BLAS library and the vector width of your CPU (and possibly other things).
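If you need behavior that doesn't depend on those implementation details, one sketch is to test the rank explicitly and fall back to the pseudoinverse for rank-deficient input:

```julia
using LinearAlgebra

A = [1 2 3; 4 5 6; 7 8 9]

# rank() uses a tolerance-based cutoff on the singular values, so the
# branch taken here is the same on every machine; pinv() handles
# rank-deficient matrices by construction (SVD with tiny singular
# values zeroed out).
Ainv = rank(A) < size(A, 1) ? pinv(A) : inv(A)
println(Ainv)
```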


It only throws an exception if it hits a pivot that is exactly zero.

See also Checking for singularity of matrix - #4 by stevengj
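One way to see that: with exact Rational arithmetic there is no roundoff, so the elimination hits a pivot that is exactly zero and the exception is raised deterministically on every machine:

```julia
using LinearAlgebra

# Rational{Int} arithmetic is exact, so the last pivot of the LU
# factorization is exactly 0//1 and inv() throws a SingularException
# regardless of BLAS or CPU.
A = [1//1 2 3; 4 5 6; 7 8 9]
try
    inv(A)
catch e
    println(e)   # a SingularException
end
```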


Yes, but why is this the case across different machines that are all running the same version of Julia? Shouldn't this be consistent across machines?

As @Oscar_Smith said, the result of a floating-point calculation can vary from machine to machine, depending on the order in which operations are performed. A likely culprit here is the vector width of the CPU's floating-point registers. I think the Apple M processors' registers are 128 bits, whereas many Intel processors' are 256 bits. If the LU factorization takes advantage of the full width of the registers, then different processors will evaluate the expressions in the factorization in a different order, with potentially different results. Generally the differences would be negligible, but for singularity it is a matter of "really very, very small in magnitude" versus exactly zero, as @stevengj implied.
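The effect of summation order is easy to reproduce directly (a sketch: foldl forces a strict left-to-right order, while sum uses pairwise summation that a SIMD kernel can exploit):

```julia
xs = fill(0.1, 1_000)

s_sequential = foldl(+, xs)   # strict left-to-right accumulation
s_default    = sum(xs)        # pairwise summation, SIMD-friendly order

# Same numbers, different grouping: the results typically differ by a few
# ulps, which is harmless here but is exactly what decides "tiny" versus
# "exactly zero" in an LU pivot.
println(s_sequential == s_default)
println(s_sequential - s_default)
```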


OK, this has actually been super helpful to learn about. Thank you @dmbates and @Oscar_Smith!