Implementing the Wedge Product

First off, does anybody know of an existing implementation of the wedge product in Julia? I performed a pretty extensive search and came up empty.

For my application I need to compute

\bigwedge_{i=1}^{n-1}\boldsymbol{v}_i,\quad\boldsymbol{v}_i \in \mathbb{R}^n

I know that the i^{\mathrm{th}} element of the wedge product can be computed as the cofactor of the (n,i) entry of M, where M is defined as

M= \begin{bmatrix} \boldsymbol{v}_1^{\top}\\ \vdots \\ \boldsymbol{v}_{n-1}^\top \\ \boldsymbol{1}^\top \end{bmatrix}
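
For example, with n=3 the cofactor of the (3,1) entry of M is

(-1)^{3+1}\det\begin{bmatrix} (\boldsymbol{v}_1)_2 & (\boldsymbol{v}_1)_3 \\ (\boldsymbol{v}_2)_2 & (\boldsymbol{v}_2)_3 \end{bmatrix} = (\boldsymbol{v}_1)_2(\boldsymbol{v}_2)_3 - (\boldsymbol{v}_1)_3(\boldsymbol{v}_2)_2,

which is the first component of \boldsymbol{v}_1 \times \boldsymbol{v}_2, consistent with the cross-product check below.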

Currently, I’ve implemented this as

using LinearAlgebra
using RecursiveArrayTools

function wedge(v::AbstractArray{T}) where {T<:AbstractArray}
    n = length(v[1])
    M = VectorOfArray(v)'   # (n-1) × n matrix with the v_i as its rows
    # The row of ones from the definition of M is not needed: the (n, i) cofactors ignore row n.
    # i-th component = cofactor of the (n, i) entry: drop column i and take the signed determinant.
    T([(-1)^(n + j) * det(M[:, filter(x -> x ≠ j, 1:n)]) for j ∈ 1:n])
end

For n=3, the wedge product should be equivalent to the cross product.

n = 3
data = [rand(n) for i ∈ 1:n-1]

maximum(abs.(wedge(data) - cross(data...)))  # 2.7755575615628914e-17

However, this seems horribly inefficient when n is large, as the determinant of an (n-1)\times (n-1) matrix is computed n times.

Does anybody know of a more elegant approach? One thought is to decompose M into a form where computing the determinant is trivial, e.g. an LU decomposition, and then perform rank-1 updates.

Extend your n x (n-1) matrix A with a random norm-one vector to an n x n matrix. Gram-Schmidt this. Take the last column and multiply it by the product of the norms of the first n-1 columns.

Because Julia is row-major, mentally rename all mentions of “rows” and “columns” in the above paragraph.

?

Ok, it depends on how your input and output are structured and how your Gram-Schmidt wants its input/output structured (I always forget which index position is upstairs/downstairs and which is called a row or a column; I can only remember fast/slow = first/second).

But I don’t see a Gram-Schmidt in LinearAlgebra anyway?

If I were to implement a very naive Gram-Schmidt, I would need fast sequential access to mathematical columns, so I’d make A[:,i] a mathematical column and expect n x (n-1) input. Ok, you’re right, I was confused and talked nonsense.
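
For concreteness, a rough sketch of that recipe with the v_i as the columns of an n x (n-1) matrix A (wedge_mgs is a hypothetical name, not an existing function; the result is only determined up to the orientation sign of the extended frame, which the QR snippet further down pins down):

using LinearAlgebra

function wedge_mgs(A::AbstractMatrix)
    n = size(A, 1)
    B = hcat(A, normalize(rand(n)))     # extend with a random norm-one column
    r = ones(eltype(B), n)              # column norms encountered during Gram-Schmidt
    for i in 1:n                        # modified Gram-Schmidt, overwriting B with Q
        r[i] = norm(B[:, i])
        B[:, i] ./= r[i]                # q_i = u_i / ‖u_i‖
        for j in i+1:n
            B[:, j] .-= dot(B[:, i], B[:, j]) .* B[:, i]   # remove the q_i component
        end
    end
    return B[:, n] .* prod(r[1:n-1])    # ± the wedge product of the columns of A
end

A random extension column is almost surely not in the span of the v_i, so r[n] stays away from zero; the stability caveats about Gram-Schmidt raised later in the thread still apply.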

Isn’t Gram-Schmidt inefficient? I.e., \mathcal{O}(n^3)?

Unless you go Strassen, your algorithm was \mathcal{O}(n^4). \mathcal{O}(n^3) is better than \mathcal{O}(n^4), right?

I’m not claiming that this is an especially clever strategy, just that it is much better.

So the code would be

julia> using LinearAlgebra

julia> for N = 2:15
           rr0 = rand(N, N)                    # columns 1:N-1 are the vectors to wedge
           V = rand(N)                         # random padding column for the QR
           W = rand(N)                         # test vector
           rr0[:, end] = W; res = det(rr0)     # reference value: det([v_1 … v_{N-1} W])
           rr0[:, end] = V
           q, r = qr(rr0)
           S = -q[:, end] * prod(diag(r)[1:N-1]) * (-1)^N   # candidate wedge product
           @show N, res ≈ dot(S, W)            # ⟨S, W⟩ should reproduce the reference determinant
       end
(N, res ≈ dot(S, W)) = (2, true)
(N, res ≈ dot(S, W)) = (3, true)
(N, res ≈ dot(S, W)) = (4, true)
(N, res ≈ dot(S, W)) = (5, true)
(N, res ≈ dot(S, W)) = (6, true)
(N, res ≈ dot(S, W)) = (7, true)
(N, res ≈ dot(S, W)) = (8, true)
(N, res ≈ dot(S, W)) = (9, true)
(N, res ≈ dot(S, W)) = (10, true)
(N, res ≈ dot(S, W)) = (11, true)
(N, res ≈ dot(S, W)) = (12, true)
(N, res ≈ dot(S, W)) = (13, true)
(N, res ≈ dot(S, W)) = (14, true)
(N, res ≈ dot(S, W)) = (15, true)

Then fix the spurious allocations, for example along the lines of the sketch below.
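
A sketch of one way to do that, keeping the sign convention the loop above validated (wedge_qr! is a hypothetical helper, not an existing function): qr! overwrites its argument, so R lives in its upper triangle, and the last column of Q can be obtained by applying Q to a basis vector in place instead of materializing q.

using LinearAlgebra

function wedge_qr!(out::AbstractVector, B::AbstractMatrix)
    n = size(B, 1)
    F = qr!(B)                                  # in-place Householder QR, overwrites B
    s = -(-1)^n * prod(i -> B[i, i], 1:n-1)     # -(-1)^n * r_11 * … * r_(n-1,n-1)
    fill!(out, zero(eltype(out)))
    out[n] = one(eltype(out))
    lmul!(F.Q, out)                             # out = Q * e_n, i.e. the last column of Q
    out .*= s
    return out
end

Note that qr! destroys its argument, so pass a copy if the columns are still needed afterwards.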

I was just pointing out that Julia is column-major.

Gram-Schmidt is not numerically stable. There are numerically stable variants, but usually the better-behaved tool for the same job is a QR factorization.

Gram-Schmidt is not numerically stable.

Is this because of floating-point error or is there another reason?

It’s algorithmic and due to floating point error. You need to stabilize it in most applications.

Modified Gram-Schmidt is usually the answer.

This specific problem can be solved most easily with the inverse matrix multiplied by the determinant and transposed, i.e. the cofactor matrix: the wedge product is its last row, or equivalently det(M) times the last column of inv(M). Was running into the same problem just now.
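
A minimal sketch of that idea (wedge_cofactor is a hypothetical name; it assumes the padded M is nonsingular, which a row of ones gives for generic inputs):

using LinearAlgebra

function wedge_cofactor(v::AbstractVector{<:AbstractVector})
    n = length(first(v))
    M = vcat(permutedims(reduce(hcat, v)), ones(1, n))  # rows v_1', …, v_{n-1}', 1'
    F = lu(M)                     # one O(n^3) factorization does all the work
    e = zeros(n); e[n] = 1.0      # basis vector e_n
    return det(F) .* (F \ e)      # det(M) * inv(M)[:, n] = cofactors of the last row
end

For n = 3 this should again agree with cross(data...) up to roundoff.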
