Should `reshape` have an option for row-major order?

Well, Julia doesn’t internally distinguish between column-major and row-major matrices like some languages do (internally, everything’s column-major), and we shouldn’t try to shove that distinction in through the back door. So only the argument’s raw linearly ordered data should matter, not whether we are thinking of it as a row-major or column-major matrix.

With that in mind, I don’t see how col_major(X, size...) would be any different from reshape(X, size...), so I don’t think we need it. I think the only useful function would be

row_major_reshape(X::AbstractArray, size...) = permutedims(reshape(X, reverse([size...])...), length(size):-1:1)

(I think it’d be better to require an explicit size argument, since I don’t think most use cases would necessarily want to keep the matrix size the same.)
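For readers coming from NumPy, here is a hedged sketch of the same construction in Python (the helper name is my own; `order='F'` is used to mimic Julia's column-major `reshape`). It reproduces NumPy's default row-major reshape of the same linear data:

```python
import numpy as np

# Hypothetical NumPy model of the row_major_reshape sketch above:
# reshape the linear data with the dims reversed (column-major fill,
# like Julia's reshape), then reverse the axes.
def row_major_reshape(x, *shape):
    return np.reshape(np.asarray(x), tuple(reversed(shape)), order='F').transpose()

x = np.arange(1, 13)                 # linear data 1..12
a = row_major_reshape(x, 3, 4)       # rows filled first
assert (a == x.reshape(3, 4)).all()  # same as NumPy's default reshape
```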

Bumping this old thread as it’s a top search engine result: I was confused until I worked through an example, because I don’t think the permutedims example above is correct.

Suppose you want to reshape a tensor from Batch x Channel x Height x Width to (Batch*Channel) x Height x Width.

In Python, you have:

>>> import numpy as np
>>> B = np.zeros([2,4,4,3])
>>> A = np.arange(1,49).reshape(4,4,3)
>>> B[1] = A
>>> B[1,0,0]
array([1., 2., 3.])
>>> C = B.reshape(-1,4,3)
>>> C[4,0]
array([1., 2., 3.])

The equivalent in Julia is:

julia> B = zeros(2,4,4,3);
julia> A = permutedims(reshape(collect(1:48),3,4,4),[3,2,1]);
julia> B[2,:,:,:] .= A;
julia> B[2,1,1,:]
3-element Array{Float64,1}:
 1.0
 2.0
 3.0
julia> C = permutedims(reshape(permutedims(B,[4,3,2,1]),3,4,:),[3,2,1])
julia> C[5,1,:]
3-element Array{Float64,1}:
 1.0
 2.0
 3.0

Note that we need the inner permutedims or else the result does not match NumPy. If someone has a more elegant solution, please do post it here for posterity / search engine stragglers 🙂.

I’m bumping this. I recently wanted to reshape an array for plotting. What I needed was simply to reshape the array “rowwise” (as in R or NumPy). I spent at least 20 minutes searching for how to do this in Julia and ended up writing two nested for loops. Having ordering = "rowwise" would be so much more obvious and simple. Sure, it is not a crucial feature, but as someone who has used R and Python I simply expected it to “just” exist.

All it takes is someone who is sufficiently bothered by this to submit a pull request, or failing that just to make a serious proposal for what the API should look like. I’d shoot down ordering = "rowwise" because that’s very 2d-focused, and Julia’s array infrastructure is far more flexible than that.

Personally I don’t understand what’s wrong with PermutedDimsArray(reshape(A, dims), order).

To some extent you’re comparing apples and oranges here. Since Python’s “fastest” dimensions are last, but Julia’s are first, why don’t you just think of the Julia equivalent of being a Width x Height x Channel x Batch rather than the other way around? Then your code would be just as simple as the NumPy code. Conversely, NumPy would probably feel awkward with this ordering.

But here’s an alternative:

C = zeros(8, 4, 3);
B = reshape(PermutedDimsArray(C, (3, 2, 1)), 3, 4, 4, 2);
B[:,:,:,2] = 1:48

Now check C and you’ll see it has the same value as in your example. By making views rather than copies we gain the ability to manipulate the parent via the dimension ordering in the view.
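For comparison, the same view-based mutation exists in NumPy too, since np.transpose also returns a view into the parent array (a minimal sketch, not from the original post):

```python
import numpy as np

# C plays the role of the Julia array above; transpose gives a view.
C = np.zeros((8, 4, 3))
V = C.transpose(2, 1, 0)   # shape (3, 4, 8), shares memory with C
V[0, 1, 4] = 7.0           # write through the view...
assert C[4, 1, 0] == 7.0   # ...and the parent sees the change
```

One caveat: the full Julia trick does not carry over, because reshaping a non-contiguous NumPy view returns a copy, so writes to the reshaped result would no longer reach C.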

thanks, that’s really helpful! I think I’ve been a bit indoctrinated by a decade of row-major library conventions and hadn’t considered simply reversing order of dimensions. I also wasn’t aware of PermutedDimsArray, so thanks for pointing that out.

The combo PermutedDimsArray(reshape(...), ...) is just awesome!
Saved my day today.

Which other language provides this level of awesomeness?

Hello,

The permutedims method seems to be the correct way to do row-major reshaping. However, I’d like to ask about a couple of ambiguities.

The first ambiguity concerns applying the row-major reshape back to the original shape, which should return the original matrix, but it does not:

rm_reshape(Q::AbstractArray{T}, shapes...) where {T} = permutedims(reshape(Q, reverse(shapes)...), (length(shapes):-1:1))

A = reshape(1:6*6, 6, 6)
B = rm_reshape(A, 2,3,2,3)
C = rm_reshape(A, 6,6)

Here C is different from A.
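A NumPy sketch of what is happening (my own modeling of rm_reshape as: read the column-major data linearly, then refill it in row-major order):

```python
import numpy as np

A = np.arange(1, 37).reshape(6, 6, order='F')   # like Julia's reshape(1:36, 6, 6)
# rm_reshape back to the same shape: linearize column-major, refill row-major
C = A.flatten(order='F').reshape(6, 6, order='C')
assert (C == A.T).all()                         # the transpose, not A
```

So applying rm_reshape with the original shape yields the transpose rather than the identity, which is exactly the C ≠ A observed above.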

Now I explain the second ambiguity.
I have a simple, working Python program that extracts the different blocks of a matrix obtained from a Kronecker product:

import numpy as np
# The matrix A simulates the kronecker product between a (2x2) and (3x3) matrix.
# Indeed, the final shape is (6x6)
A = np.reshape(np.arange(1,37), (6,6)).T

# With the matrix B I can see the different blocks. 
# I need an extra dimension of 1 to make it work properly.
B = np.reshape(A, (1,2,3,2,3))

Then B[0,:,i,:,j] is the block corresponding to element (i,j) of the second matrix, and B[0,i,:,j,:] is the block corresponding to element (i,j) of the first matrix.
However, in Julia:

A = reshape(1:6*6, 6, 6)
B = rm_reshape(A, 1,2,3,2,3)

I find that B[1,:,1,:,1] and B[1,1,:,1,:] are exactly the transposes of the real blocks I found with Python.

Am I doing something wrong?

No, you’re not doing anything wrong: numpy.reshape defaults to row-major order (order='C'), whereas Julia’s reshape works in column-major order (corresponding to order='F' with numpy.reshape).

It is best to simply get used to the native column-major layout when using Julia, i.e. to reverse the order of your dimensions compared to what you did in NumPy.
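The difference is easy to see on a small example (a quick NumPy check):

```python
import numpy as np

a = np.arange(1, 7)
c = np.reshape(a, (2, 3), order='C')   # row-major: rows filled first
f = np.reshape(a, (2, 3), order='F')   # column-major, like Julia's reshape
assert (c == [[1, 2, 3], [4, 5, 6]]).all()
assert (f == [[1, 3, 5], [2, 4, 6]]).all()
```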
