MPLAPACK (multiprecision BLAS + LAPACK)

MPLAPACK works with any of these multiprecision libraries: GMP, MPFR+MPC, libquadmath, and David Bailey’s DD/QD library.
Here are the user guide and the repository.

All BLAS routines are available in multiple-precision arithmetic.
LAPACK functions are available with multiple-precision real arithmetic (complex support is planned for v2), covering:

  • Linear Equations
  • Linear Least Squares (LLS) Problems
  • Generalized Linear Least Squares (LSE and GLM)
  • Standard Eigenvalue and Singular Value Problems
      • Symmetric Eigenproblems (SEP)
      • Nonsymmetric Eigenproblems (NEP)
      • Singular Value Decomposition (SVD)
  • Generalized Eigenvalue and Singular Value Problems
      • Generalized Symmetric Definite Eigenproblems (GSEP)
      • Generalized Nonsymmetric Eigenproblems (GNEP)
      • Generalized Singular Value Decomposition (GSVD)

Version 1.0 (2021-09-28) is released under the 2-clause BSD license.

5 Likes

Would be cool to get that in Julia. I bet we could be faster and at 1/10th of the lines of code.

1 Like

Don’t we have most of it in Julia already, e.g. with GenericLinearAlgebra.jl and GenericSchur.jl?
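
Roughly, the generic-element path looks like this today (a sketch, assuming from memory that loading GenericLinearAlgebra.jl extends svdvals/eigvals to BigFloat matrices, and that GenericSchur.jl handles the nonsymmetric case via complex matrices):

```julia
using LinearAlgebra, GenericLinearAlgebra, GenericSchur

setprecision(BigFloat, 256)        # work with 256-bit significands

A = big.(rand(20, 20))             # Matrix{BigFloat}

svdvals(A)                         # singular values via GenericLinearAlgebra
eigvals(Symmetric(A + A'))         # symmetric eigenproblem (SEP)
eigvals(complex.(A))               # nonsymmetric eigenvalues via GenericSchur
```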

1 Like

Good question! I have not been using the BLAS or LAPACK functions myself, so I don’t know.

I’m not talking about the BLAS/LAPACK API (which makes little sense to re-implement in Julia), but rather the functionality.

GenericLinearAlgebra doesn’t do any of the blocking or repacking that is necessary to achieve high performance (even for slower datatypes). An ideal version would be able to do all the performance tricks that still matter while staying generic.
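
To make that concrete, here is a hypothetical sketch of blocking plus repacking for a generic element type (tile size and packing strategy are made up, not any package’s actual kernel):

```julia
# Hypothetical blocked, repacked generic matmul (illustration only).
function blocked_mul!(C::Matrix{T}, A::Matrix{T}, B::Matrix{T}; bs::Int = 64) where {T}
    m, k = size(A); k2, n = size(B)
    @assert k == k2 && size(C) == (m, n)
    fill!(C, zero(T))
    Abuf = Matrix{T}(undef, bs, bs)   # packed tiles: contiguous, cache-friendly
    Bbuf = Matrix{T}(undef, bs, bs)
    for jj in 1:bs:n, kk in 1:bs:k, ii in 1:bs:m
        ib, jb, kb = min(bs, m - ii + 1), min(bs, n - jj + 1), min(bs, k - kk + 1)
        # repack the current tiles of A and B into the buffers
        copyto!(view(Abuf, 1:ib, 1:kb), view(A, ii:ii+ib-1, kk:kk+kb-1))
        copyto!(view(Bbuf, 1:kb, 1:jb), view(B, kk:kk+kb-1, jj:jj+jb-1))
        # micro-kernel: plain triple loop over the packed tiles
        for j in 1:jb, p in 1:kb, i in 1:ib
            C[ii+i-1, jj+j-1] += Abuf[i, p] * Bbuf[p, j]
        end
    end
    return C
end
```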

Also, it’s probably worth implementing sub-cubic algorithms, since they do better when multiplication is more costly than addition, which is true for most of the more complex number types.
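
For illustration, a toy Strassen recursion (square, power-of-two sizes only, cutoff made up). It trades 8 block multiplications for 7 plus extra additions, which pays off exactly when element multiplication is much slower than addition:

```julia
# Toy Strassen multiply: 7 recursive products instead of 8, more additions.
function strassen(A::Matrix{T}, B::Matrix{T}; cutoff::Int = 64) where {T}
    n = size(A, 1)
    @assert size(A) == size(B) == (n, n) && ispow2(n)
    n <= cutoff && return A * B            # fall back to the generic O(n^3) product
    h = n ÷ 2
    A11, A12, A21, A22 = A[1:h,1:h], A[1:h,h+1:n], A[h+1:n,1:h], A[h+1:n,h+1:n]
    B11, B12, B21, B22 = B[1:h,1:h], B[1:h,h+1:n], B[h+1:n,1:h], B[h+1:n,h+1:n]
    M1 = strassen(A11 + A22, B11 + B22; cutoff = cutoff)
    M2 = strassen(A21 + A22, B11;       cutoff = cutoff)
    M3 = strassen(A11, B12 - B22;       cutoff = cutoff)
    M4 = strassen(A22, B21 - B11;       cutoff = cutoff)
    M5 = strassen(A11 + A12, B22;       cutoff = cutoff)
    M6 = strassen(A21 - A11, B11 + B12; cutoff = cutoff)
    M7 = strassen(A12 - A22, B21 + B22; cutoff = cutoff)
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return [C11 C12; C21 C22]
end
```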

Blocking/repacking/re-ordering only matters for problems that are memory-bound (for simpler implementations). With arbitrary-precision arithmetic, most of these algorithms should be compute-bound, in which case you might as well use LINPACK/EISPACK-style “textbook” triple-loop algorithms.
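
As a concrete example of the “textbook” style meant here: an unblocked LU with partial pivoting written as a plain triple loop over a generic element type (a sketch, not any library’s code; it overwrites A in place):

```julia
# Classic unblocked LU with partial pivoting: no blocking, which is fine
# when the element arithmetic (e.g. BigFloat) dominates the cost.
function lu_textbook!(A::Matrix{T}) where {T}
    n = size(A, 1)
    piv = collect(1:n)
    for k in 1:n
        p = k - 1 + argmax(abs.(A[k:n, k]))          # partial pivoting
        if p != k
            A[k, :], A[p, :] = A[p, :], A[k, :]      # swap rows k and p
            piv[k], piv[p] = piv[p], piv[k]
        end
        for i in k+1:n
            A[i, k] /= A[k, k]                       # multiplier (unit-lower L)
            for j in k+1:n
                A[i, j] -= A[i, k] * A[k, j]         # eliminate below the pivot
            end
        end
    end
    return A, piv    # L and U stored in place, plus the pivot order
end
```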

With arbitrary precision, you are probably right, but I bet that for something like DoubleDouble the packing still matters. Also, as the blocking becomes less important, the sub-cubic methods become more important, so fanciness of some kind is still necessary for good performance.

There are order-of-magnitude speedups to be found by doing arbitrary-precision linear algebra using atomic dot products (small sizes) and block matrix multiplication (large sizes), converting FP matrix multiplication to exact matrix multiplication over Z, where a multimodular + Strassen approach can be used. See my paper “Faster arbitrary-precision dot product and matrix multiplication” (arXiv:1901.04289) and the blog post “High-precision linear algebra in Julia: BigFloat vs Arb”.
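
To sketch just the conversion step (a deliberately naive toy: uniform scaling and a plain BigInt product standing in for the multimodular + Strassen machinery of the paper; the function name is made up):

```julia
# Toy illustration: turn an approximate BigFloat product into an exact product over Z.
function approx_mul_over_Z(A::Matrix{BigFloat}, B::Matrix{BigFloat},
                           prec::Int = precision(BigFloat))
    ea = exponent(maximum(abs, A))     # common scaling exponents (assumes nonzero matrices)
    eb = exponent(maximum(abs, B))
    Ai = [round(BigInt, a * big(2.0)^(prec - ea)) for a in A]   # fixed-point images
    Bi = [round(BigInt, b * big(2.0)^(prec - eb)) for b in B]
    Ci = Ai * Bi                       # exact integer matrix product
    return Ci .* big(2.0)^(ea + eb - 2prec)   # rescale back to BigFloat
end
```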

6 Likes