Utilizing LP constraint structure with Tulip?

I’m solving many linear optimization problems that differ in variable bounds but share the same constraints. There is no objective, i.e., I’m just looking for a feasible solution in each case. I’m using MultiFloats.Float64x8 arithmetic (basically a tuple of eight Float64 values).

The Tulip.jl solver seems to allow customization to exploit structure in the constraint matrix, but I’d appreciate some pointers on how to accomplish that.

This is the LP formulation notation used in Tulip’s documentation:
\min_{x} \quad c^{T} x \quad \text{s.t.} \quad A x = b, \quad l \leq x \leq u

In my case:

  1. l and u (the variable bound vectors) are both finite
  2. b is zero
  3. Elements of A are integers, with magnitudes ranging from 0 and 1 up to “so big that BigInts are necessary to represent them exactly”
  4. The number of variables that I’m mostly interested in is 65536 (n == 2^16)
  5. There’s a parameter d, such that m == n - d. d can range from one to about thirty.
  6. The matrix A is a block matrix formed by concatenating matrices A0 and I(m), so A == hcat(A0, I(m)). I(m) is an identity matrix of order m.
  7. A0 is a rectangular dense matrix with m rows and just d columns
  8. Elements of A0 are nonzero.
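
For concreteness, here is a small stand-in for how such an instance could be assembled (toy sizes and random integers in place of my real data):

```julia
using LinearAlgebra, SparseArrays
using MultiFloats

const T = Float64x8

# Toy sizes; the real problems have n == 2^16 and d between 1 and ~30.
d, n = 4, 64
m = n - d

# A0: dense m×d block of nonzero integers (random stand-in for the real data,
# where entries can be far larger).
A0 = rand(1:10^6, m, d) .* rand([-1, 1], m, d)

# A == hcat(A0, I(m)), stored sparse in the working precision T
# (conversion goes through Float64 here; the real data needs more care).
A = hcat(sparse(T.(float(A0))), sparse(one(T) * I, m, m))

# b is zero and all variable bounds are finite.
b = zeros(T, m)
l = fill(-one(T), n)
u = fill(+one(T), n)
```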

Are there any additional properties of my problems that I should look for?

Any advice on how to exploit this structure, if it indeed looks exploitable?

CC @mtanneau

Hi! Thanks for reaching out :slight_smile:

Do you have an example of code we could run to test it out?

I see two components in your post: you’re using extended-precision arithmetic, and you’re wondering whether you can speed up the linear algebra.

Note about the first part: I’ve encountered issues with MultiFloats because of how NaN and Inf values are treated. It may not be an issue in your case because none of your bounds are infinite, but I would disable presolve just to be sure.
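
Something like this should turn it off when going through MathOptInterface (I’m writing the parameter name from memory, so double-check it against Tulip’s list of options):

```julia
using Tulip, MultiFloats
import MathOptInterface as MOI

# Extended-precision optimizer with presolve switched off.
# "Presolve_Level" is quoted from memory; verify it against the docs.
opt = Tulip.Optimizer{Float64x8}()
MOI.set(opt, MOI.RawOptimizerAttribute("Presolve_Level"), 0)
```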

About the linear algebra.
If you want to implement your custom algebra, you need the following ingredients:

  • A custom type for your matrix. It will be assembled at this line:

    A = construct_matrix(mfact.T, m, n + nslack, aI, aJ, aV, mfact.options...)
    

    which, in a nutshell, will call your constructor and pass it the coefficients in COO format (a rough sketch of such a type follows this list).

  • A custom KKT solver that can solve augmented systems, which implements the KKT.update! and KKT.solve! functions.
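
To give an idea of the first ingredient, here is a rough sketch of a wrapper type that keeps your dense A_0 block separate from the implicit identity. All names are made up for illustration; the actual hookup goes through the construct_matrix call quoted above, and Tulip’s real matrix interface should be followed for the products.

```julia
using LinearAlgebra, SparseArrays

# Illustrative wrapper: store the dense A0 block explicitly and treat the
# trailing I(m) block implicitly. Only size/getindex and the two products
# the IPM needs are sketched here.
struct A0PlusIdentity{T} <: AbstractMatrix{T}
    A0::Matrix{T}          # dense, m rows, d columns
end

Base.size(A::A0PlusIdentity) = (size(A.A0, 1), size(A.A0, 1) + size(A.A0, 2))

function Base.getindex(A::A0PlusIdentity{T}, i::Int, j::Int) where {T}
    d = size(A.A0, 2)
    return j <= d ? A.A0[i, j] : (i == j - d ? one(T) : zero(T))
end

# Mirrors the COO argument list that construct_matrix passes along:
# assemble once, then peel off the dense left block.
function from_coo(::Type{A0PlusIdentity{T}}, m, n, aI, aJ, aV) where {T}
    M = Matrix(sparse(aI, aJ, T.(aV), m, n))
    return A0PlusIdentity{T}(M[:, 1:(n - m)])
end

# The products A*x and A'*y decompose into a dense part and an identity part:
matvec(A::A0PlusIdentity, x)   = A.A0 * x[1:size(A.A0, 2)] .+ x[(size(A.A0, 2) + 1):end]
matvec_t(A::A0PlusIdentity, y) = vcat(A.A0' * y, y)
```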

Making a custom matrix type is the easy part. What is more involved is coding the linear solver. Namely, you need code that can solve linear systems of the form

\begin{bmatrix} -H & A^{T}\\ A & Q \end{bmatrix} \begin{bmatrix} x\\ y \end{bmatrix} = \begin{bmatrix} \xi_{d} \\ \xi_{p} \end{bmatrix}

where H, Q are diagonal matrices with positive elements.

If I expand this system in your case, you would get:

\left[ \begin{array}{ccc} -H_{0} & & A_{0}^{T}\\ & -H_{s} & I \\ A_{0} & I & Q \end{array}\right] \left[ \begin{array}{c} x_{0}\\ s\\ y \end{array} \right] = \left[ \begin{array}{c} \xi^{0}_{d}\\ \xi^{s}_{d}\\ \xi_{p} \end{array} \right]

where I split the variables x = (x_{0}, s) according to the structure you described.
The sparse linear algebra used by default will compute an LDL^{T} factorization of the above matrix, without exploiting any structure.
In your setting, you may want to exploit the fact that A_{0} is dense and very tall (many rows, few columns), and end up computing a dense factorization of something that looks like A_{0}^{T}DA_{0} + H_{0} for some diagonal matrix D.
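
For illustration, that reduction could look like this in plain LinearAlgebra (h0, hs, q are the diagonals of H_0, H_s, Q, and the ξ’s follow the right-hand side above; this is a standalone sketch, not Tulip’s actual KKT interface):

```julia
using LinearAlgebra

# Eliminate s and y, factor the small dense d×d Schur complement
# A0' * D * A0 + H0, then recover y and s by substitution.
function solve_structured_kkt(A0, h0, hs, q, ξd0, ξds, ξp)
    D   = Diagonal(inv.(q .+ inv.(hs)))            # D = (Q + Hs⁻¹)⁻¹
    rhs = ξp .+ ξds ./ hs                          # ξp + Hs⁻¹ ξds

    S = Symmetric(A0' * (D * A0) + Diagonal(h0))   # d×d, dense, SPD
    F = cholesky(S)                                # dense factorization

    x0 = F \ (A0' * (D * rhs) .- ξd0)
    y  = D * (rhs .- A0 * x0)                      # from the third block row
    s  = (y .- ξds) ./ hs                          # from the second block row
    return x0, s, y
end
```

Only the diagonals change between interior-point iterations, so the d×d factorization would live in KKT.update! and the substitutions in KKT.solve!; the dominant cost is forming A_{0}^{T} D A_{0}, roughly O(m d^{2}) per iteration.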

I coded something similar for (what I called) unit block-angular matrices of the form

A = \left[ \begin{array}{ccccc} B_{0} & B_{1} & B_{2} & \dots & B_{R}\\ 0 & e^{T} & \\ 0 && e^{T} \\ \vdots &&& \ddots\\ 0 &&&& e^{T}\\ \end{array} \right]

At the time, I observed speedups of about 10x compared to the default sparse linear algebra. This was in Float64 though, so a big chunk of the speedup came from using dense BLAS calls under the hood.
