How to take a slice of an array without it turning into a vector

Background: I am realizing a filter bank of FIR filters that involves matrix operations on Array{ComplexF64,2}. Every time I take a slice, it turns into an Array{ComplexF64,1} and I have to reshape it to get it back into an n×m array before I can do further operations on it. Is there some way I can take a slice and have it remain n×1, rather than a one-dimensional array, or do I have to do a reshape after taking each slice?

julia> xin = ones(ComplexF64,16,1);

julia> typeof(xin)
Array{Complex{Float64},2}

julia> xi = xin[1:4];

julia> typeof(xi)
Array{Complex{Float64},1}

julia> xi = reshape(xi,4,1);

julia> typeof(xi)
Array{Complex{Float64},2}


Slice in both dimensions.

xi = xin[1:4, :];

xin[1:4,1:1] is probably what you want
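
For what it's worth, here is a small sketch of the rule (variable names are just for illustration): indexing with a scalar drops that dimension, while indexing with a range, even a one-element range like 1:1, keeps it.

xin = ones(ComplexF64, 16, 1)

xi1 = xin[1:4]        # linear indexing with a range: 4-element vector
xi2 = xin[1:4, 1]     # second index is a scalar: also a vector
xi3 = xin[1:4, 1:1]   # both indices are ranges: 4×1 two-dimensional array

size(xi1)   # (4,)
size(xi3)   # (4, 1)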

Could you explain why you need it to remain as a 2d array? Can’t it just be a 1d vector?


DNF, for example:

julia> a = [1 2 3 4; 5 6 7 8; 9 10 11 12; 13 14 15 16]
4×4 Array{Int64,2}:
  1   2   3   4
  5   6   7   8
  9  10  11  12
 13  14  15  16
julia> b = [ 1 1; 1 1; 1 1; 1 1]
4×2 Array{Int64,2}:
 1  1
 1  1
 1  1
 1  1
julia> c = a[2,:]*b
ERROR: DimensionMismatch("matrix A has dimensions (4,1), matrix B has dimensions (4,2)")
julia> c = reshape(a[2,:],1,4)*b
1×2 Array{Int64,2}:
 26  26

Gunnar, a[1:4, :] just gives a back, not a 1x4 array

julia> a[1:4, :]
4×4 Array{Int64,2}:
  1   2   3   4
  5   6   7   8
  9  10  11  12
 13  14  15  16

jishnub, I think your suggestion solves my issue

julia> c = a[2:2,:]*b
1×2 Array{Int64,2}:
 26  26
julia> size(a[2,:])
(4,)
julia> size(a[2:2,:])
(1, 4)

I would never have figured this out on my own. Thank you.

What about

julia> c = a[2,:]' * b
1×2 LinearAlgebra.Adjoint{Int64,Array{Int64,1}}:
 26  26

?

This is much more readable and you are not messing around with your structure too much.

julia> permutedims(a[2,:]) * b
1×2 Array{Int64,2}:
 26  26

How so? To me

c = a[2,:]' * b

seems much more like something straight out of a maths book, and more readable.

Also:

julia> using BenchmarkTools

julia> @btime ($a)[2,:]' * $b;
  99.752 ns (3 allocations: 224 bytes)

julia> @btime permutedims(($a)[2,:]) * $b
  291.106 ns (8 allocations: 528 bytes)

Well, in larger code, spotting a ' is harder than explicitly stating it. The timing difference is negligible tbh.

Only one of them is correct for complex matrices though, which is what’s discussed here.

OK, that’s more a matter of taste, but in that case, use transpose, which doesn’t allocate.
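
As a rough sketch of why (the vector v here is just a stand-in for a slice such as a[2,:]): transpose wraps the vector lazily rather than copying it, so only the slice itself allocates.

using LinearAlgebra

v = ComplexF64[1, 2, 3, 4]   # stand-in for a slice like a[2, :]
t = transpose(v)             # lazy wrapper, no copy of the data
t isa Transpose              # true
parent(t) === v              # true: t shares memory with v
size(t)                      # (1, 4), so it multiplies like a row vector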

No, this was about real valued matrices.

Yes, either transpose! or permutedims! works. Comes down to preference 🙂

The original post was very clearly talking about Array{ComplexF64,2}.

Anyway, my point was to question the need to stick to matrices when slicing. You can just as well accept vectors and then use ' or transpose.

OK, but I was replying to a specific example. And switching to transpose doesn’t change the main point.

Sure, that’s a valid point. I am also not sure why a matrix is really needed. A vector should work just fine and would be easier to debug imo if something goes wrong later (assuming this is part of some larger code).


The examples have been a bit contradictory but I suspect that either it’s not really necessary to extract slices or things would become easier by transposing the entire problem so the filters are columns instead of rows.
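
For example, a rough sketch of what I mean (H, ntaps, and nfilt are made-up names): with one filter per column, column slices come out in whatever shape you ask for, and no reshape is needed.

ntaps, nfilt = 8, 4
H = ones(ComplexF64, ntaps, nfilt)   # hypothetical layout: one filter per column

h2   = H[:, 2]       # a single filter as a vector, size (ntaps,)
h2m  = H[:, 2:2]     # the same filter kept as an ntaps×1 matrix
Hsub = H[:, 1:2]     # the first two filters, still a matrix

Since Julia arrays are column-major, the columns are also contiguous in memory, so these slices are cheap to take.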


DNF, a[2,:]'*b is fine with real matrices, but with complex matrices (which is pretty well everything I use), one would need transpose(a[2,:])*b
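
To make the difference concrete (a small example of my own, values picked arbitrarily): ' is the conjugate transpose, so for complex data it conjugates the entries, while transpose does not.

v = [1 + 2im, 3 - 1im]

v'            # adjoint: entries become 1-2im and 3+1im
transpose(v)  # transpose only: entries stay 1+2im and 3-1im

# so with complex a one would write transpose(a[2,:]) * b (or a[2:2,:] * b)
# rather than a[2,:]' * b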

Gunnar, my understanding is that it’s better to use columns for the variable that iterates in the inner loop. For filters, I tend to reserve this for containers whose elements change with time; hence the use of rows for each stage of a cascade filter. I should also mention again that all the filters are complex; for the special case of real inputs they can be simplified to real filters, but this often adds complexity, since one must then guarantee that all poles occur in complex conjugate pairs, “clean” them when they are not exactly complex conjugates, and so on. So unless I need the time and/or space optimizations, I generally leave the filters as complex, at least when first developing them for a particular application. Also, I am working on filter banks, so if I want them to be usable for phased-array applications, then again I prefer to have them as complex, at least for now. My personal opinion (not shared by others as far as I can tell) is that complex signal processing is actually cleaner and simpler than “real” signal processing.
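
To tie this back to the original question, here is a minimal sketch of that row-per-stage layout (names and sizes are made up): each stage’s coefficients live in a row, and the k:k trick from above keeps a stage as a 1×n matrix.

nstages, ntaps = 3, 4
H = ones(ComplexF64, nstages, ntaps)   # hypothetical: one filter stage per row
x = ones(ComplexF64, ntaps, 1)         # samples changing with time, as a column

h2 = H[2:2, :]   # stage 2 kept as a 1×ntaps matrix, no reshape needed
y  = h2 * x      # a 1×1 matrix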