# Why do the same calculations on matrices yield different results?

Hello all,

I am fairly new to Julia, coming from the Python world, and I am still learning the basic machinery of `Matrix`. I noticed a small difference in how a “self-declared identity” matrix behaves in the following matrix operations:

```julia
julia> self_defined_I = [1. 0.; 0. 1.]
2×2 Matrix{Float64}:
 1.0  0.0
 0.0  1.0

julia> x = [1 2; 3 4]
2×2 Matrix{Int64}:
 1  2
 3  4

julia> inv(x)
2×2 Matrix{Float64}:
 -2.0   1.0
  1.5  -0.5

julia> x^(-1)
2×2 Matrix{Float64}:
 -2.0   1.0
  1.5  -0.5

julia> self_defined_I/x
2×2 Matrix{Float64}:
 -2.0   1.0
  1.5  -0.5

julia> x\self_defined_I
2×2 Matrix{Float64}:
 -2.0   1.0
  1.5  -0.5

julia> inv(x) == x^(-1) == self_defined_I/x == x\self_defined_I
false

julia> inv(x) == x^(-1) == x\self_defined_I
true

julia> x\self_defined_I == self_defined_I/x
false
```

However, when using `isapprox` (the `≈` operator), they compare equal:

```julia
julia> x\self_defined_I ≈ self_defined_I/x
true
```

Also, when using the `LinearAlgebra.I` identity, no such inconsistencies occur, even with the exact `==` operator:

```julia
julia> using LinearAlgebra

julia> LinearAlgebra.I == self_defined_I
true

julia> inv(x) == x^(-1) == x\self_defined_I == LinearAlgebra.I/x == x\LinearAlgebra.I
true
```

I can see that all the results are of type `Matrix{Float64}`, so why does the `self_defined_I/x` operation specifically produce a matrix that differs in the last bits of precision?

```julia
julia> self_defined_I/x - inv(x)
2×2 Matrix{Float64}:
 -4.44089e-16   2.22045e-16
  2.22045e-16  -1.11022e-16

julia> x\self_defined_I - inv(x)
2×2 Matrix{Float64}:
 0.0  0.0
 0.0  0.0
```

I would also like to ask whether this is something one should worry about: could such differences propagate through other kinds of matrix operations and have a significant impact on accuracy?

Another off-topic question: from a design perspective, would it make sense to force the output of `x*inv(x)` to be `LinearAlgebra.I`?

P.S. Following are the runtime details:

```julia
julia> versioninfo()
Julia Version 1.7.2
Commit bf53498635 (2022-02-06 15:21 UTC)
Platform Info:
  OS: Linux (x86_64-pc-linux-gnu)
  CPU: Intel(R) Core(TM) i5-10210U CPU @ 1.60GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-12.0.1 (ORCJIT, skylake)
```

To start with, it doesn’t really make sense to compare floats with `==`. The difference you’re seeing is on the order of `eps(Float64)`, so the only way it can change the result of a calculation is if that calculation is chaotic, i.e. extremely sensitive to its inputs.
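To put a number on “order of `eps`”, here is a minimal sketch reusing `x` and `self_defined_I` from the question:

```julia
x = [1 2; 3 4]
self_defined_I = [1.0 0.0; 0.0 1.0]

# The two code paths disagree only in the last bits of the mantissa.
diff = maximum(abs.(self_defined_I / x - x \ self_defined_I))
println(diff <= 4 * eps(Float64))   # true: within a few machine epsilons

# isapprox (≈) compares with a relative tolerance (sqrt(eps) by default),
# which is the appropriate way to compare computed floats.
println(self_defined_I / x ≈ x \ self_defined_I)   # true
```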

And you don’t want to be inverting matrices anyway: use `\` directly, or save an LU factorization if you need to solve with the same matrix many times.
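For instance, factoring once and reusing the factorization looks like this (a sketch; the matrix `A` and right-hand sides here are made up for illustration):

```julia
using LinearAlgebra

A = [4.0 1.0; 1.0 3.0]
F = lu(A)              # factor once: O(n^3)

b1 = [1.0, 2.0]
b2 = [5.0, 6.0]

x1 = F \ b1            # each solve reuses the factorization: O(n^2)
x2 = F \ b2

# Same results as A \ b, but without refactoring A every time.
println(x1 ≈ A \ b1)   # true
println(x2 ≈ A \ b2)   # true
```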

`LinearAlgebra.I` is of a specific type (`UniformScaling`), so the compiler knows that `anything\I == inv(anything)`. It can’t know that about your home-rolled `I` unless you make that information available via the type system.
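You can see the type distinction directly in the REPL (a small sketch):

```julia
using LinearAlgebra

x = [1 2; 3 4]

# I is not a Matrix at all; it is a lazy, size-less identity.
println(typeof(I))                    # UniformScaling{Bool}

# Because the type encodes "this is the identity", x \ I and I / x
# can both be computed as inv(x) via the same code path:
println(x \ I == I / x == inv(x))     # true

# Materializing it into a Matrix{Float64} throws that information away:
M = Matrix{Float64}(I, 2, 2)
println(typeof(M))                    # Matrix{Float64}
```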

It’s a similar story for `x*inv(x)`: the compiler cannot know that `inv(x)` is related to `x` unless you somehow made `inv(x)` a lazy type that, upon multiplication with `y`, checked whether `x ≈ y`, which seems like overkill since most of the time the check would come back false.
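To make that concrete, here is a hypothetical toy wrapper (the name `LazyInv` and everything about it is my invention, not part of LinearAlgebra) that remembers which matrix it inverts, so multiplying by that same matrix can return the identity exactly; I use a cheap `===` identity check rather than `≈`:

```julia
using LinearAlgebra

# Hypothetical sketch: a lazy inverse that keeps the original matrix around.
struct LazyInv{T,M<:AbstractMatrix{T}}
    A::M
end

lazyinv(A::AbstractMatrix) = LazyInv(A)

# If we multiply by the very matrix we are the inverse of, return the
# exact identity; otherwise fall back to an ordinary solve, inv(A) * B.
function Base.:*(L::LazyInv, B::AbstractMatrix)
    L.A === B ? Matrix{Float64}(I, size(B)...) : L.A \ B
end

x = [1.0 2.0; 3.0 4.0]
println(lazyinv(x) * x == I)               # true: exact identity, no round-off
println(lazyinv(x) * [1.0 0.0; 0.0 1.0])   # ordinary solve: inv(x) up to round-off
```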


I’m left wondering how and why floating-point behavior in Python differs from Julia’s. AFAIU, `I` in Julia is a proxy for an identity matrix, and dispatch can resolve `I * A = A` directly without performing a matrix multiplication.

Edit: granted, `numpy.eye` seems to generate the full matrix…

I doubt there is a real difference in the floating-point behavior. It’s more likely that Python doesn’t have `UniformScaling`, the type of `LinearAlgebra.I`. When the OP rolls their own `I`, they remove this information from the type system, so dispatch does need to perform the multiplications.


Thanks a lot for the answers!