@fastmath for matrices?

I wonder if there is a package that can figure out the fastest way to multiply a chain of matrices, using dynamic programming or any other approach. For example, if I am evaluating A*B*c where A is 100x100, B is 100x100 and c is 100x1, then it is more efficient to multiply B and c first and then left-multiply the result by A, which is O(n^2), than to multiply A and B first and then right-multiply the result by c, which is O(n^3).

Also, since the title is @fastmath for matrices, considering the type compatibility of the matrices would be an interesting twist on the traditional complexity-minimization problem. Does anyone share my view that this is actually important, and could make Julia orders of magnitude faster at evaluating matrix products than languages that don't automatically optimize the asymptotic cost?
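
To make the gap concrete, here is a quick timing comparison of the two associations (a minimal sketch; it assumes BenchmarkTools.jl is installed, but plain @time shows the same trend):

```julia
using BenchmarkTools

A = rand(100, 100); B = rand(100, 100); c = rand(100);

@btime ($A * $B) * $c;  # forms the 100x100 product first: O(n^3)
@btime $A * ($B * $c);  # two matrix-vector products only: O(n^2)
```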

SugarBLAS.jl?

This looks very useful, and I might actually use it in my work, but it is not what I am looking for! I was talking about a macro that figures out where to put the parentheses in A*B*c, i.e. one that rewrites it as A*(B*c).
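
For what it's worth, here is a minimal sketch of the idea (all the names here are made up, not from any package). The classic matrix-chain dynamic program picks the cheapest association; since the dimensions are only known at runtime, a macro could simply rewrite A*B*c into chainmul(A, B, c):

```julia
# Hypothetical sketch: pick the cheapest association of a matrix chain
# with the classic O(k^3) dynamic program, then evaluate it in that order.
function chainmul(Ms::AbstractVecOrMat...)
    k = length(Ms)
    k == 1 && return Ms[1]
    # Factor i has size dims[i] x dims[i+1] (a vector counts as n x 1).
    dims = [size(Ms[1], 1); [size(M, 2) for M in Ms]]
    cost  = zeros(Int, k, k)  # cost[i,j]: cheapest scalar-multiplication count for Ms[i:j]
    split = zeros(Int, k, k)  # split[i,j]: best place to cut the chain Ms[i:j]
    for len in 2:k, i in 1:(k - len + 1)
        j = i + len - 1
        cost[i, j] = typemax(Int)
        for s in i:(j - 1)
            c = cost[i, s] + cost[s + 1, j] + dims[i] * dims[s + 1] * dims[j + 1]
            if c < cost[i, j]
                cost[i, j], split[i, j] = c, s
            end
        end
    end
    evalchain(Ms, split, 1, k)
end

# Recursively multiply Ms[i:j] according to the recorded split points.
evalchain(Ms, split, i, j) =
    i == j ? Ms[i] :
    evalchain(Ms, split, i, split[i, j]) * evalchain(Ms, split, split[i, j] + 1, j)
```

With the sizes above, chainmul(A, B, c) evaluates A * (B * c) and never forms the 100x100 product.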

LinearOperators.jl will perform the product in the order you want, but it requires that you convert your matrices to linear operators (which are really just thin wrappers).
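
Something along these lines, if I am reading the package right (a sketch, untested):

```julia
using LinearOperators

A = rand(100, 100); B = rand(100, 100); c = rand(100)

opA = LinearOperator(A)  # thin wrapper around A: no copy, no factorization
opB = LinearOperator(B)

# The product opA * opB is lazy: no matrix-matrix multiplication happens.
# Applying the composite to c evaluates right to left, i.e. A * (B * c).
y = (opA * opB) * c
```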

Again, this looks very useful, but I am not sure it addresses the same problem. From a quick look, it seems like LinearOperators.jl does lazy evaluation of linear operators, plus some special handling of cheap operations such as transposes. I will need a closer look at the source to see whether it is indirectly doing this parenthesis insertion when all the operators come from matrices. I would also prefer a solution that does not leave the world of matrices and vectors, so that all the BLAS and LAPACK functions remain available; they may or may not be supported for this new LinearOperator type.

Perfect, just what I was looking for!