Coefficients of combination of matrices

In general there may be no solution in which the LHS and RHS are even “almost equal”. There will be some errors, which may or may not be small. The question is: how do you want the errors to be distributed?

  • Do you want them to be as small as possible “on average”? Then probably minimize the L^2 (sum-of-squares) error. This gives you a least-squares (QP) problem.
  • Do you want the biggest individual error to be as small as possible? Then probably minimize the L^\infty (maximum) norm. This gives you an LP (see the reformulation below).
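
For concreteness, here is the standard epigraph reformulation behind the second bullet, written for a generic problem with matrix A and target b (below, these will be W and d):

\min_x \Vert b - Ax \Vert_\infty \quad\Longleftrightarrow\quad \min_{x,t}\; t \\ \mbox{subject to } -t \le (b - Ax)_i \le t \mbox{ for all }i

Both x and the scalar t are decision variables, and every constraint is linear, so the right-hand side is an LP.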

If you’re not sure, I would just try the L^2 norm — least-squares minimization is the most common choice and has the easiest algorithms.

Sure. As @albheim suggested, I would start by “vectorizing” your tensors into column vectors, since the tensor shape seems to be irrelevant to your problem. Let d = vec(D), and let W be the matrix whose columns are the “vectorized” versions of all of the tensors on the LHS, flipping signs so that every term on the LHS appears with a + (no −). Suppose you have n such tensors, so that you have n weights x and W has n columns. Then you are solving:

\min_x \Vert d - Wx \Vert \\ \mbox{subject to }\sum_k x_k = 1, \; 0 \le x_j \le 1
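
which is a convex problem (a constrained least-squares problem in the L^2 norm). If you want to try this form directly, here is a minimal sketch — my own illustration, not from the thread, assuming Convex.jl with the SCS solver — with random placeholder data standing in for your vectorized tensors:

```julia
using Convex, SCS

# Placeholder data: in practice build W from your tensors,
# e.g. W = hcat(vec.(tensors)...), and d = vec(D).
W = rand(100, 5)
d = rand(100)
n = size(W, 2)

x = Variable(n)                                  # the n weights
problem = minimize(norm(d - W * x, 2),           # least-squares objective
                   [sum(x) == 1, x >= 0, x <= 1])
solve!(problem, SCS.Optimizer)

weights = evaluate(x)                            # optimal coefficients
```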

Alternatively, you can eliminate x_n by setting x_n = 1 - \sum_{k=1}^{n-1} x_k, which simplifies it to a box-constrained problem:

\min_{\tilde{x}} \Vert \tilde{d} - \tilde{W}\tilde{x} \Vert \\ \mbox{subject to }0 \le \tilde{x}_j \le 1\mbox{ for }j=1,\ldots,n-1

where \tilde{x} is the first n-1 rows of x, \tilde{W} is the first n-1 columns of W with the last column of W subtracted from each, and \tilde{d} is d minus the last column of W. In the L^2 norm, there are lots of approaches to box-constrained least-squares, some of which are linked in the thread I mentioned above.
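
As a sketch of this elimination in code (again my own illustration, assuming Optim.jl's Fminbox for the box-constrained solve; the names Wt, dt, xt stand for \tilde{W}, \tilde{d}, \tilde{x}, and W, d, n are reused from the previous snippet):

```julia
using Optim

# Reduced problem: subtract the last column of W from each of the
# first n-1 columns, and from d.
Wt = W[:, 1:n-1] .- W[:, n]        # \tilde{W}
dt = d .- W[:, n]                  # \tilde{d}
m  = n - 1

f(xt)     = sum(abs2, dt - Wt * xt)              # squared L^2 error
g!(G, xt) = (G .= 2 .* (Wt' * (Wt * xt - dt)))   # gradient of f

res = optimize(f, g!, zeros(m), ones(m), fill(1/n, m), Fminbox(LBFGS()))
xt  = Optim.minimizer(res)
x   = [xt; 1 - sum(xt)]            # recover the eliminated weight x_n
```

One caveat with this elimination (my note, not from the thread): the bound 0 \le x_n \le 1 becomes 0 \le \sum_k \tilde{x}_k \le 1, which is not a box constraint, so the reduced problem as written can return a slightly negative x_n. If that matters to you, stick with the sum-to-one formulation above.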
