I have a function f that can take matrices of various sizes as inputs. When I call it with a matrix of size d × 1 produced by m[:, k], where k is an integer, it gets converted to a vector of size d. How can I prevent this? This must have been discussed many times, but I could not find a solution. Below is an MWE:
function f(x1, x2)
    d, n1 = size(x1)   # assumes x1 has two dimensions
    d, n2 = size(x2)   # throws if x2 is a Vector: size(x2) is a 1-tuple
    out = zeros(d, n1 * n2)
end
function test()
    x1 = randn(2, 3)
    x2 = randn(2, 5)
    # Works fine: x2[:, 1:2] is a 2×2 Matrix
    y = f(x1, x2[:, 1:2])
    # Does not work: x2[:, 1] is a 2-element Vector, so d, n2 = size(x2)
    # inside f throws a BoundsError
    y = f(x1, x2[:, 1])
    display(y)
end
test()
Many routines in Base accept either a matrix or a vector, where the latter is treated as a matrix with one column. This is a pretty convenient pattern for linear-algebra code.
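For example, left division accepts both forms, returning a vector for a vector right-hand side and a one-column matrix for a one-column matrix:

A = randn(3, 3)
b = randn(3)          # 3-element Vector
B = reshape(b, 3, 1)  # 3×1 Matrix with the same data
A \ b                 # returns a 3-element Vector
A \ B                 # returns a 3×1 Matrix with the same values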
The way to do this is simply to pass explicit dimension arguments to size, for example changing your code above to:
d, n1 = size(x1, 1), size(x1, 2)
The size function returns 1 for any dimension beyond the last, so size(x1, 2) will be 1 if x1 is a vector (1d array). Note that if x1 is a vector, you are also allowed to index it as x1[i, 1]; that is, you can pass 1 for "extra" trailing dimensions, which lets you treat it as a matrix with one column.
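Applied to the MWE above, a minimal sketch along these lines (same placeholder body, just sizing with explicit dimension arguments) would be:

function f(x1, x2)
    d = size(x1, 1)
    n1 = size(x1, 2)  # 1 if x1 is a vector, by the trailing-dimension rule
    n2 = size(x2, 2)  # likewise, so f(x1, x2[:, 1]) now works
    out = zeros(d, n1 * n2)
end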
You can even declare this explicitly in your argument signature as function f(x1::AbstractVecOrMat, x2::AbstractVecOrMat), where AbstractVecOrMat{T} is a built-in type alias for Union{AbstractVector{T},AbstractMatrix{T}}.
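For instance, the sketch above could be narrowed to the following (same body, but arrays of three or more dimensions are now rejected at dispatch time):

function f(x1::AbstractVecOrMat, x2::AbstractVecOrMat)
    zeros(size(x1, 1), size(x1, 2) * size(x2, 2))
end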
Yeah, it’s probably a bad idea now that I think about it a bit more. It would require returning a special Dims object that acts like an infinite tuple.
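To make that concrete, here is a hypothetical sketch (PaddedDims is an invented name; nothing like it exists in Base) of what such an object would have to do:

# Hypothetical: a size-like object that pads with trailing 1s forever
struct PaddedDims{N}
    dims::NTuple{N,Int}
end
Base.getindex(p::PaddedDims, i::Int) = i <= length(p.dims) ? p.dims[i] : 1
Base.iterate(p::PaddedDims, i::Int=1) = (p[i], i + 1)  # never returns nothing

pd = PaddedDims((2,))
d, n = pd  # destructuring works: d == 2, n == 1
# ...but pd is an infinite iterator, so collect(pd) would never terminate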