Linear Algebra tricks?

OK – I tested it:

```julia
julia> M1 = Eye{Int}(5);
julia> M2 = I(5);
julia> M3 = Diagonal(fill(1,5));

julia> sizeof(M1), sizeof(M2), sizeof(M3)
(8, 8, 8)

julia> sizeof(M1.diag), sizeof(M2.diag), sizeof(M3.diag)
(8, 5, 40)
```

… and then I tested it more:

```julia
julia> M1 = rand(-9:9,3,7)
3×7 Array{Int64,2}:
  0  -6   1   3  6  -7   9
 -4  -8   8  -5  8  -9   9
  8   2  -7  -8  0  -5  -5

julia> M2 = [M1 I]
3×10 Array{Int64,2}:
  0  -6   1   3  6  -7   9  1  0  0
 -4  -8   8  -5  8  -9   9  0  1  0
  8   2  -7  -8  0  -5  -5  0  0  1

julia> sizeof(M1)
168

julia> sizeof(M2)
240

julia> M3 = [M1 Eye(3)]
3×10 SparseArrays.SparseMatrixCSC{Float64,Int64} with 22 stored entries:
  [2,  1]  =  -4.0
  [3,  1]  =  8.0
  [1,  2]  =  -6.0
  [2,  2]  =  -8.0
  [3,  2]  =  2.0
  [1,  3]  =  1.0
  [2,  3]  =  8.0
  [3,  3]  =  -7.0
  [1,  4]  =  3.0
  [2,  4]  =  -5.0
  [3,  4]  =  -8.0
  [1,  5]  =  6.0
  [2,  5]  =  8.0
  [1,  6]  =  -7.0
  [2,  6]  =  -9.0
  [3,  6]  =  -5.0
  [1,  7]  =  9.0
  [2,  7]  =  9.0
  [3,  7]  =  -5.0
  [1,  8]  =  1.0
  [2,  9]  =  1.0
  [3, 10]  =  1.0

julia> sizeof(M2), sizeof(M3)
(240, 40)
```

So – the FillArrays machinery figured out that `M1` has two zeros, leaving 19 nonzero elements, plus the three diagonal elements of `Eye(3)` — hence the 22 stored entries. That is clever.
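A deterministic miniature of the same behavior, assuming (as in the session above) that `hcat` with an `Eye` lowers to a `SparseMatrixCSC` and drops structural zeros — this depends on the FillArrays version in use:

```julia
using LinearAlgebra, SparseArrays, FillArrays

A = [1 0 3; 0 2 0]            # 2×3 with three nonzeros
S = [A Eye(2)]                # hcat with a lazy identity produces a sparse matrix
@assert S isa SparseMatrixCSC
@assert nnz(S) == 3 + 2       # A's nonzeros plus the identity's diagonal
```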


@BLI isn’t this what you were looking for?

```julia
julia> Matrix{Int}(I,3,3)
3×3 Array{Int64,2}:
 1  0  0
 0  1  0
 0  0  1

julia> Matrix{Float64}(I,3,3)
3×3 Array{Float64,2}:
 1.0  0.0  0.0
 0.0  1.0  0.0
 0.0  0.0  1.0

julia> Matrix{ComplexF32}(I,3,3)
3×3 Array{Complex{Float32},2}:
 1.0+0.0im  0.0+0.0im  0.0+0.0im
 0.0+0.0im  1.0+0.0im  0.0+0.0im
 0.0+0.0im  0.0+0.0im  1.0+0.0im

# etc. etc.
```

You can also use the function `Base.summarysize`, which computes the amount of memory, in bytes, used by all unique objects reachable from the argument.

```julia
julia> M1 = Eye{Int}(5);
julia> M2 = I(5);
julia> M3 = Diagonal(fill(1,5));

julia> Base.summarysize(M1), Base.summarysize(M2), Base.summarysize(M3)
(8, 53, 88)
```

and also

```julia
julia> M1 = rand(-9:9,3,7);
julia> M2 = [M1 I];
julia> M3 = [M1 Eye(3)];
julia> Base.summarysize(M1), Base.summarysize(M2), Base.summarysize(M3)
(208, 280, 616)
```
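The difference from `sizeof` is that `summarysize` follows references: `sizeof` on a `Diagonal` measures only the wrapper struct (one pointer on a 64-bit machine), while `summarysize` also counts the wrapped vector. A small sketch of that distinction:

```julia
using LinearAlgebra

d = Diagonal(fill(1, 5))
@assert sizeof(d) == sizeof(Int)              # the wrapper holds just a reference
@assert Base.summarysize(d) > sizeof(d.diag)  # reachable bytes include d.diag's 40-byte payload
```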

Well, I want my function’s behavior to depend on the types the user has chosen, so I guess I’ll parameterize the type in my function.

Final question: if such a function doesn’t already exist in the Julia ecosystem, I’d like to make a function to “swap” permutation vectors. What I mean is: suppose I have a vector `cp` (“column permutation”) such that `A[:,cp] == A*P`, where `P = I(length(cp))[:,cp]` is a permutation matrix. Then I’d like to compute the permutation vector `rp` so that `A[rp,:] == P*A`. The following function will do the trick:

```julia
function permswap(v::Vector{T}) where T <: Integer
    n = length(v)
    vsort = sort(v)
    if vsort == 1:n
        return I(n)[:,v]*vsort
    else
        error("Argument is not a permutation vector")
    end
end
```
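A quick sanity check of the claimed identities, with `permswap` repeated so the snippet runs on its own (the example matrix is hypothetical, chosen square so that both `A*P` and `P*A` are defined):

```julia
using LinearAlgebra

# permswap from above, repeated so this snippet is self-contained
function permswap(v::Vector{T}) where T <: Integer
    n = length(v)
    vsort = sort(v)
    vsort == 1:n || error("Argument is not a permutation vector")
    return I(n)[:,v]*vsort
end

A  = [10 20 30; 40 50 60; 70 80 90]
cp = [3, 1, 2]
P  = I(3)[:, cp]
@assert A[:, cp] == A * P    # column permutation
rp = permswap(cp)
@assert A[rp, :] == P * A    # the matching row permutation
@assert permswap(rp) == cp   # applying it twice restores the original
```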

Since using this function twice brings me back to the original, i.e., `permswap(permswap(v)) == v`, it can be used both for converting `cp` to `rp` and vice versa.
Question – since I’m a beginner when it comes to making functions in Julia: what would be a better name for this function? Would `perm_swap` be better? Or `permutation_swap`? Or `swapperm`? Or something else? (Note: there is already a `randperm` function.)

It works that way because `Eye(5)` is shorthand for `Diagonal(Ones(5))`, where `Ones(5)` is a lazy vector of all ones. So anything `Diagonal` can do, so can `Eye`, in this case making a sparse array.
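A small sketch of that equivalence (assuming FillArrays is loaded):

```julia
using LinearAlgebra, FillArrays

E = Eye(5)
@assert E isa Diagonal          # Eye is a Diagonal wrapping a lazy Ones vector
@assert E.diag isa Ones         # no per-element storage for the diagonal
@assert E == Diagonal(ones(5))  # numerically the 5×5 identity
```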

Note that FillArrays.jl types are immutable.


`invperm(v)` does what you want, I think. (Transposing a permutation matrix is equivalent to inverting it, since permutations are orthogonal/unitary.)

The implementation of `invperm` might be worth looking over — it should be quite a bit faster than your `permswap` function. (Your version constructs an `n × n` dense matrix `I(n)`, then constructs a new matrix with re-ordered columns, then performs a matrix-vector multiplication, for O(n²) space and time complexity, whereas `invperm` is O(n).)
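For intuition, an O(n) inverse permutation can be sketched in a few lines (`myinvperm` is a hypothetical name; the real `invperm` also validates its input):

```julia
# If v places j at position v[j], the inverse places v[j] back at j,
# so that w[v] == 1:n.
function myinvperm(v::AbstractVector{<:Integer})
    w = similar(v)
    for (j, i) in pairs(v)
        w[i] = j
    end
    return w
end

@assert myinvperm([3, 1, 2]) == invperm([3, 1, 2]) == [2, 3, 1]
```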


Indeed! `invperm()` is some 5,000–6,000 times faster than my hobby algorithm. [How does one go about finding such routines?]

E.g.:

```julia
using LinearAlgebra
apropos("permutation")
```