Finding eigenvectors (simple problems)

I’m studying linear algebra and I’ve run into a problem that I’ve figured out how to solve with Julia, but I’m hoping there’s a better way. Given some simple 2×2 transformation matrices, I have to determine which vectors in a list are eigenvectors of the matrix. Here’s an example of a couple of simple problems and how I’ve solved them:

using LinearAlgebra

T = [1 0; 0 2]
v = ([1,0],[1,-1],[0,2],[0,3])

# Which of the following vectors in v are eigenvectors of T?

# The function below computes the angle between two vectors and checks whether it
# is 0 or π. My thinking was that if the angle is 0.0 (or close to it, given
# floating-point error) or π, then the vector must be an eigenvector because it
# spans the same line.

function is_eigenvector(x,y) 
    return (
        acos(x'*y/(norm(x)*norm(y))) < 1e-7 ||
        round(π, digits=5) < acos(x'*y/(norm(x)*norm(y))) < round(π, digits=4)
    ) ?
    true :
    false
end

julia> v[findall(vec -> is_eigenvector(vec, T*vec), v)]
([1, 0], [0, 2], [0, 3])

T = [-3 8; 2 3]
v = ([-1,-1],[1,1],[0,2],[4,-1])
julia> v[findall(vec -> is_eigenvector(vec, T*vec), v)]
([-1, -1], [1, 1], [4, -1])

The problem with this approach is that sometimes I get a very, very small number (e.g., 2.1073424255447017e-8), due to what I assume is floating-point error, so I’m wondering whether there’s a better approach to this kind of problem. I’ve seen the eigen and eigvecs functions, but I don’t understand how to use them in this context, since eigvecs, for example, just returns a matrix:

T = [-3 8; 2 3]
v = ([-1,-1],[1,1],[0,2],[4,-1])

julia> eigvecs(T)
2×2 Array{Float64,2}:
 -0.970143  -0.707107
  0.242536  -0.707107

# Can this output somehow be used to determine that ([-1, -1], [1, 1], [4, -1])
# are eigenvectors?
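One possible way to use this output (a sketch only): the columns of eigvecs(T) are unit-norm eigenvectors, so a candidate is an eigenvector exactly when it is parallel to one of those columns. Since the columns' order and signs can vary, compare directions via the absolute cosine rather than the vectors themselves:

```julia
using LinearAlgebra

T = [-3 8; 2 3]
v = ([-1, -1], [1, 1], [0, 2], [4, -1])

E = eigvecs(T)  # columns are unit-norm eigenvectors of T

# A candidate x is an eigenvector iff it is parallel to some column of E,
# i.e. the absolute cosine of the angle between them is (numerically) 1.
is_ev(x) = any(abs(dot(normalize(float.(x)), E[:, j])) ≈ 1 for j in axes(E, 2))

filter(is_ev, collect(v))  # [[-1, -1], [1, 1], [4, -1]]
```

This still relies on floating-point comparisons (the ≈), so it inherits the same tolerance question as the angle-based version.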

Your assumption is correct. Since you have all integers in the problem, my hint would be to look for ways to determine whether two vectors are parallel that require only integer computations.


I am not quite sure why you need cos; you can just check, e.g., elementwise ratios directly.

The following crude approach allows some error when comparing floats (that is pretty much inevitable), but keeps the tolerance at 0 for exact arithmetic:

myϵ(x::AbstractFloat) = eps(x)
myϵ(::Union{Rational,Integer}) = 0

function is_parallel(x, y; rel_ϵ = 1)
    mi, ma = extrema(x ./ y)
    ma - mi ≤ rel_ϵ * myϵ(x[1] + y[1])
end

T = [-3 8; 2 3]
v = ([-1,-1],[1,1],[0,2],[4,-1])
is_parallel(v[1], T*v[1]) # true

Wouldn’t this approach run into issues if at some point x[i] == y[i] == 0?

There is a solution without division at all.


Good point. Possibly something like

myϵ(x::AbstractFloat) = eps(x)
myϵ(::Union{Rational,Integer}) = 0
is_prop((x1, y1), (x2, y2), ϵ) = abs(x1 * y2 - x2 * y1) ≤ ϵ

function is_parallel(x, y; rel_ϵ = 1)
    xys = zip(x, y)
    xy1 = first(xys)
    ϵ = rel_ϵ * myϵ(prod(xy1))
    all(xy -> is_prop(xy1, xy, ϵ), Iterators.drop(xys, 1))
end

can be used to preserve exact arithmetic whenever feasible, and fall back to floats when not.

(A fun exercise for the reader is coming up with x and y that give an incorrect true because of integer overflow. But this may not be relevant to the OP.)
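One possible answer to that exercise (a sketch, assuming the default 64-bit Int): pick two orthogonal vectors whose cross-product term overflows to exactly zero.

```julia
# With Int64, 2^32 * 2^32 is mathematically 2^64, which wraps around to 0.
x, y = [2^32, 0], [0, 2^32]          # orthogonal, so certainly not parallel
overflowed = x[1] * y[2] - x[2] * y[1]
overflowed == 0                      # true, so is_parallel above would report true
```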

First (correct me if I’m wrong), I don’t believe that testing whether a vector vᵢ is parallel to its transformed version T * vᵢ tells me whether or not it is an eigenvector of T. The transformed vector would have to still lie on the same line, just scaled up or down, or it could point in the opposite direction and possibly be scaled up or down. If the vector has moved (it’s no longer on the same line), it could be parallel but it wouldn’t be an eigenvector.

I got caught up in the 2-D graphical (geometric) interpretation of these vectors, which led to the over-complicated function above. The function below simply checks whether or not the transformed vector T * vᵢ is a scalar multiple of the original vector vᵢ, and it appears to work just fine:

# v is an eigenvector of T if Tv = λv for some λ
# Tv and v are multiples if and only if dot(v, Tv) * v == dot(v, v) * Tv

function is_eigenvector2(v, T)
    return dot(v, T * v) * v == dot(v, v) * T * v ? true : false
end

T = [-3 8; 2 3]
v = ([-1, -1],[1, 1],[0, 2],[4, -1])

julia> v[findall(vec -> is_eigenvector2(vec, T), v)]
([-1, -1], [1, 1], [4, -1])

Well, if a vector is collinear with its transformed version, then they span the same line, and by definition it is an eigenvector.

I think that direct vector comparison can fail due to numerical errors, so one should be very careful about how to compare them. A scalar approach should be more stable; here is my solution:

myϵ(x::AbstractFloat) = eps(x)
myϵ(::Union{Rational,Integer}) = 0

function iscollinear(x, y, releps = 1)
    abs(dot(x, y)^2 - dot(x, x)*dot(y, y)) <= releps * myϵ(x[1]) * dot(x, x) * dot(y, y)
end

v[findall(vec -> iscollinear(vec, T*vec), v)]

Of course, the myϵ part can be done better (one could do type promotion or something along those lines), but the main idea is that you are basically calculating sin² of the angle between the vectors x and y, so it is just one number and you can easily control its error. This solution also scales to any number of dimensions; the only problem is overflow in the Int case, which can be avoided in many different ways.
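For instance, the same function applied to (made-up) 3-D vectors, with the definitions repeated so the snippet runs on its own:

```julia
using LinearAlgebra

myϵ(x::AbstractFloat) = eps(x)
myϵ(::Union{Rational,Integer}) = 0

function iscollinear(x, y, releps = 1)
    abs(dot(x, y)^2 - dot(x, x) * dot(y, y)) <= releps * myϵ(x[1]) * dot(x, x) * dot(y, y)
end

# Works unchanged in three (or more) dimensions:
iscollinear([1, 2, 3], 5 .* [1, 2, 3])   # true:  y is a scalar multiple of x
iscollinear([1, 0, 0], [0, 1, 0])        # false: orthogonal vectors
```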

Edit: on the other hand, since this formula involves the subtraction of two potentially large numbers, it can easily produce instability of its own. I guess only real-world experimentation can show which approach is better.


By the way, since == already produces a Boolean, there is no need for the ternary operator here. Your is_eigenvector2 function is equivalent to

function is_eigenvector2(v, T)
    return dot(v, T * v) * v == dot(v, v) * T * v
end

I don’t know what text you are using to study linear algebra, but if you want to think of vectors as “arrows” between points, then what is usually called a “vector” is an equivalence class of all arrows with the same length and direction. So, basically, just think of them as arrows from the origin that you can move around.

Incidentally, as much as I like programming in Julia, I would recommend pencil & paper to follow math textbooks. Most of the exercises should be computationally trivial and going to a computer is usually just a distraction.


This is fine. You could also have backtracked from your original angle computation like this:

acos(x'*y/(norm(x)*norm(y))) == 0 or π

x'*y/(norm(x)*norm(y)) == 1 or -1

x'*y == ±norm(x) * norm(y)

(x'*y)^2 == norm(x)^2 * norm(y)^2

(x'*y)^2 == (x'*x) * (y'*y)

and now it’s only integers involved. If you compare with your new solution you will see that it’s actually very similar, just a dot operation away.

But in the 2D case I would just compute the cross product.
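A sketch of that idea, using a hypothetical helper cross2d (the 2-D cross product is the scalar x₁y₂ − x₂y₁, which is zero exactly when the vectors are collinear, and stays in integer arithmetic for integer input):

```julia
# 2-D "cross product": zero iff x and y are collinear (exact for Ints)
cross2d(x, y) = x[1] * y[2] - x[2] * y[1]

is_eigenvector3(v, T) = cross2d(v, T * v) == 0

T = [-3 8; 2 3]
v = ([-1, -1], [1, 1], [0, 2], [4, -1])
filter(x -> is_eigenvector3(x, T), collect(v))  # [[-1, -1], [1, 1], [4, -1]]
```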


I would recommend pencil & paper to follow math textbooks

I’m doing both :wink: For this particular lesson, the idea was to do the transformation and then plot the transformed vectors in a 2-D plane to determine visually whether they are eigenvectors or not. Also, the course I’m following uses Python, which I’ve never used, so I’m trying to figure out how to solve some of the problems from scratch in Julia rather than translating Python to Julia.