Syntax for constructing a 1x1 matrix

Before Julia started taking transposes seriously, one could conveniently use e.g. [5]' to build a 1x1 matrix containing 5. In v0.6+ this results in a RowVector instead. Do we have any compact syntax for building initialized 1x1 matrices now?
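For reference, a minimal REPL sketch of what the old trick returns now (shown on a recent 1.x Julia, where the lazy wrapper is an Adjoint rather than the RowVector of v0.6; the exact display varies by version):

julia> [5]'    # lazy row wrapper, not a Matrix
1×1 adjoint(::Vector{Int64}) with eltype Int64:
 5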


hcat(1) :frowning:
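For completeness, what that returns at the REPL (output shown for a recent Julia, where the 1x1 Array{Int64,2} seen elsewhere in this thread prints as Matrix{Int64}):

julia> hcat(1)
1×1 Matrix{Int64}:
 1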

That’s not half-bad! Thanks!

The following also works:

julia> [5][:,:]
1×1 Array{Int64,2}:
 5

Still ugly… :wink:


But functions can be defined for a reason (and compilers inline them for a reason), so:

julia> mat(sc) = reshape([sc],1,1)
mat (generic function with 1 method)

julia> mat(5)
1×1 Array{Int64,2}:
 5

and, better yet, consider:

julia> using StaticArrays

julia> smat(sc) = @SMatrix [sc]
smat (generic function with 1 method)

julia> smat(5)
1×1 StaticArrays.SArray{Tuple{1,1},Int64,2,1}:
 5

The little @code_llvm output I looked at suggests these methods are preferable, as well as ultimately more readable and correct.


The smat function indeed compiles to extremely compact code, followed by hcat, and then by reshape (which is quite suboptimal). I’ve marked this as the best solution, thanks!

EDIT: in cases where a conventional Matrix instead of SMatrix is preferred, the function

mat(s) = Matrix(@SMatrix [s])

is surprisingly efficient too, even better than hcat:

julia> @code_llvm mat(1)

define %jl_value_t addrspace(10)* @julia_mat_62521(i64) #0 !dbg !5 {
top:
  %"#temp#" = alloca %SArray.10, align 8
  %1 = getelementptr inbounds %SArray.10, %SArray.10* %"#temp#", i64 0, i32 0, i64 0
  store i64 %0, i64* %1, align 8
  %2 = addrspacecast %SArray.10* %"#temp#" to %SArray.10 addrspace(11)*
  %3 = call %jl_value_t addrspace(10)* @julia_convert_62498(%jl_value_t addrspace(10)* addrspacecast (%jl_value_t* inttoptr (i64 4464354096 to %jl_value_t*) to %jl_value_t addrspace(10)*), %SArray.10 addrspace(11)* nocapture readonly %2)
  ret %jl_value_t addrspace(10)* %3
}

(I just love StaticArrays.jl!)

You do have a function call in there, so I’m not sure how you can say whether it is efficient or not.
Benchmarking them, I found that hcat and your mat had pretty much identical performance.
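For anyone who wants to reproduce the comparison, a minimal, self-contained sketch (timings are omitted here, since they depend on the machine and Julia version):

using BenchmarkTools, StaticArrays

mat(s) = Matrix(@SMatrix [s])   # the variant from the post above

@btime hcat(5)
@btime mat(5)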

Oops, you’re completely right. I’m no good at reading LLVM :grin:. Actually, hcat is 25% faster than mat on my system! Still, smat is way faster than both.

As a side note, it might be a good idea to avoid depending on the extra StaticArrays package if it is not otherwise needed.

julia> using BenchmarkTools

julia> @btime reshape([1],1,1)
  63.990 ns (3 allocations: 192 bytes)
1×1 Array{Int64,2}:
 1

julia> @btime fill(1, (1,1))
  29.674 ns (1 allocation: 96 bytes)
1×1 Array{Int64,2}:
 1

julia> @btime hcat(1)
  33.067 ns (1 allocation: 96 bytes)
1×1 Array{Int64,2}:
 1

The difference between hcat and fill is small, but consistent. I also find fill more elegant.

fill is my favorite too.

With a function:

function mat(s)
    m = Array{typeof(s)}(undef, 1, 1)  # uninitialized 1×1 array (the pre-0.7 spelling was Array{typeof(s)}(1,1))
    @inbounds m[1] = s
    return m
end

another 10% can be squeezed out (on my system, with my Julia and LLVM versions):

julia> @btime mat(5)
  26.736 ns (1 allocation: 96 bytes)
1×1 Array{Int64,2}:
 5

julia> @btime fill(5,(1,1))
  28.640 ns (1 allocation: 96 bytes)
1×1 Array{Int64,2}:
 5

And the @code_llvm looks nice and short.


I typically use diagm([5]).
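For example (on Julia 1.0+ this requires the LinearAlgebra stdlib, and on some early 1.x versions the vector method has to be spelled diagm(0 => [5])):

julia> using LinearAlgebra

julia> diagm([5])
1×1 Matrix{Int64}:
 5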

This works for array literals:

[5;;]

The number of semicolons indicates the rank of the resulting array; for instance, [5;;;] is a 1x1x1 array.
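A minimal illustration, assuming Julia 1.7 or newer (where these array literals were introduced):

julia> [5;;]
1×1 Matrix{Int64}:
 5

julia> [5;;;]
1×1×1 Array{Int64, 3}:
[:, :, 1] =
 5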
