A code example of a primal-dual algorithm for LP

Here is a minimal Julia implementation of a primal-dual barrier (interior-point) method for solving an LP.
The example problem is

Min x[1] + 2 * x[2] + 3 * x[3]
s.t. sum(x) == 1
     x .>= 0
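For reference, the same LP can be cross-checked with JuMP (assuming the JuMP and HiGHS packages are installed; the variable is named y here to avoid clashing with the x used below):

```julia
using JuMP, HiGHS

model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, y[1:3] >= 0)
@constraint(model, sum(y) == 1)
@objective(model, Min, y[1] + 2y[2] + 3y[3])
optimize!(model)
objective_value(model) # 1.0, attained at y = [1, 0, 0]
```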

My code is

function gapfun(x, z) return transpose(x) * z end # duality gap x' * z
function bfc(g, N) return g / N^2 end # barrier parameter μ = σ * (gap / N) with centering σ = 1/N
# data of the problem
N = 3
A = [1 1 1.]
c = [1, 2, 3.]
# initial trial primal-dual interior point (feasible for both primal and dual)
x = [0.1, 0.4, 0.5]
l = zeros(1) # dual vector; only one equality constraint here
z = [1, 2, 3.] # satisfies dual feasibility A' * l + z = c
# termination gap tolerance
TGT = 1e-6
for ite in 0:999999
    # initial gap
    g = gapfun(x, z)
    @info "ite = $ite ā–¶ lb = $(first(l)) < $(c' * x) = ub, gap = $g"
    g < TGT && (@info "ā–¶ ā–¶ ā–¶ primal-dual algorithm converges"; break)
    BFC = bfc(g, N)
    # generate the Newton direction (Ī”x, Ī”l, Ī”z)
    v = x .* z .- BFC # residual of the perturbed complementarity x .* z = BFC
    xz, vz = x ./ z, v ./ z
    dl = ((A .* transpose(xz)) * transpose(A)) \ (A * vz) # normal equations: A * diag(x ./ z) * A' * dl = A * (v ./ z)
    dz = -transpose(A) * dl # preserves dual feasibility A' * l + z = c
    dx = -vz - xz .* dz # from the linearized complementarity equation
    # generate stepsize via the ratio test (largest α keeping x, z > 0)
    bx = dx .< 0
    bz = dz .< 0
    (any(bx) || any(bz)) || error("Abnormal: the direction never hits the boundary")
    ax = x[bx] ./ -dx[bx]
    az = z[bz] ./ -dz[bz]
    α = 0.999 * minimum([ax; az])
    # update
    @. x += α * dx
    @. l += α * dl
    @. z += α * dz
end
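If the loop exits via the gap test, the iterate should sit near the optimal vertex. A quick sanity check, reusing the x, c, l defined above:

```julia
# on this instance the optimum is x = [1, 0, 0] with objective value 1,
# and after convergence the bounds first(l) <= 1 <= c' * x differ by < 1e-6
@show c' * x   # upper bound, ā‰ˆ 1
@show first(l) # lower bound, ā‰ˆ 1
@assert abs(c' * x - 1) < 1e-4
```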

Is there a question here?

If it is intended as a tutorial-type post, I might ask that we spend our effort improving the existing JuMP documentation instead of writing posts like this for Discourse. For example, there is:


It’s not all about software; sometimes the algorithms themselves are worth looking into.
I’ll take a look at the existing framework.

This is a forum that is explicitly about the Julia programming language. Reflecting about algorithms and theory is of course useful, but perhaps a personal blog would be a more appropriate venue?


I moved it to the Offtopic category.

Are there any well-developed blog platforms? Please let me know; I’m not familiar with such apps (I’m not a westerner).

In my opinion, forem.julialang.org is the exact place for this type of content: The Julia Forem: What it is, why we made one, and how to use it! - Julia Community 🟣

It’s fine for this time, but people who follow the Offtopic category do not expect a sudden surge in posts about mathematical programming algorithms or solver shortcomings.
As a forum moderator, I would kindly invite you to think about how you use this specific Discourse platform, and whether it is the best way for you to contribute to the Julia community as a whole (which is of course very happy to have you!).

For instance, documentation PRs may be welcome in the JuMP ecosystem, as pointed out by @odow, but they would need to be focused on what users need. To a certain extent, this forum has the same purpose: helping and orienting people. Unfortunately for me and you, what users need does not necessarily coincide with what we find interesting at a given moment. On the other hand, a blog or social network is a perfect format to tell the world what we find interesting!

Regarding the technical blogging stack, it really depends on your target audience, and I’m not an expert either. You can set up a simple static website with GitHub Pages + Jekyll, or you can publish on websites like https://medium.com/ which has a lot of math-adjacent content.

I hope this does not deter you from contributing. It’s great that you have lots of energy and advanced technical knowledge to share. I’m just trying to steer it into something that benefits all of us :slight_smile:


Maybe I don’t have considerable expertise, at least not as much as you do.
Actually I don’t have any experience, e.g. in how to open a GitHub pull request.

I think I’m currently learning, especially optimization-related things.
And I can post some noteworthy points. The hope is that when someone else has similar thoughts in the future, they will have a reference and can continue the line of thought.

Therefore, I think Discourse can also serve as a place for discussion, not only for asking and answering questions. Different people may give different solutions; even the same person may think one way in 2024 and totally change his mind in 2025, due to some new findings.

I’m currently reading your package while working on my algorithm discussed in Ipopt does NOT memorize a best solution found so far - #5 by WalterMadelim
I found this sentence:

In such cases, sparse AD is only beneficial in ā€œmixed modeā€, where we combine a forward and a reverse backend.

I guess that this sparse AD might be a novel feature?
Is it already widely applied?


That’s completely okay! Getting involved in the Julia community is what got me to learn all this stuff.
There’s a good guide to making a first pull request at Making a first Julia pull request | Katharine Hyatt. It’s slightly old and designed for PRs to the language itself, but PRs to the docs are much simpler. You just click the pen icon in the top-right corner of any page and follow the instructions from there. As long as you know how to write Markdown, you’re golden.

I was trying to make that point above: when people look for a reference or for points that some individual finds noteworthy, there are other places that seem more natural.

The trouble with your posts is that they’re not questions, so they’re not meant to teach you stuff. And they’re very specialized, so they may not teach many other people stuff they care about. That is why I’m suggesting other venues for your learner’s musings.

Please open a new thread if you wish to discuss a different topic; this is not related to linear programming at all.

The theory behind mixed mode sparse AD has been known for three decades or so, but I’m not aware of implementations in high-level programming languages (perhaps with the exception of Matlab). I can’t say whether it’s widely used, in Julia it’s probably not used at all yet. We’ll release a preprint next week describing our implementation, which has some people (e.g. CasADi developers) interested, so we’ll see.

Not to derail the topic, but MathOptInterface implements a sparse reverse mode and then sparse forward-over-reverse for the Hessian.

By ā€œmixed-modeā€ here I meant sparse Jacobian matrices using a combination of forward and reverse mode, which seems new in Julia. Of course Hessian matrices use forward-over-reverse, whether dense or sparse, and that’s implemented in several packages, but I thought the question was not about that (maybe I understood it wrong).


Somewhat related…
The solution method that scales well is the interior-point method (the topic of this thread).
Newton’s method is typically involved, and thereby gradients, Jacobians, and Hessians.

If I remember correctly, interior-point methods for LP typically involve very simple barrier functions whose derivatives are known analytically. Autodiff is most useful when derivatives are not known analytically, or would be tedious to obtain.
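For instance, the derivatives of the standard log barrier are closed-form, so no AD is needed. A small sketch (x0 is just an arbitrary interior point):

```julia
using LinearAlgebra

phi(x) = -sum(log.(x))        # log barrier for x .> 0
x0 = [0.1, 0.4, 0.5]
grad = -1 ./ x0               # analytic gradient of phi
hess = Diagonal(1 ./ x0 .^ 2) # analytic (diagonal) Hessian of phi

# cheap finite-difference check of the first gradient component
h = 1e-6
fd = (phi(x0 .+ [h, 0, 0]) - phi(x0)) / h
isapprox(fd, grad[1]; atol=1e-3) # true
```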

Yes this is fair.

People know that established optimization problems need a solver (e.g. Ipopt).
But it may be somewhat astonishing that solving Ax = b itself sometimes needs a solver too, and I’m currently having trouble selecting one. :innocent:

******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
 Ipopt is released as open source code under the Eclipse Public License (EPL).
         For more information visit https://github.com/coin-or/Ipopt
******************************************************************************

This is Ipopt version 3.14.17, running with linear solver MUMPS 5.7.3.

I thought MUMPS was an LP solver, which was an illusion.
It is an Ax = b solver.
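If I understand correctly, Julia’s backslash operator makes a similar choice internally: for a SparseMatrixCSC, M \ rhs dispatches to a sparse direct factorization, which is the same kind of job MUMPS performs inside Ipopt. A minimal illustration:

```julia
using LinearAlgebra, SparseArrays

# `\` picks a factorization from the matrix type:
# dense -> LAPACK (LU/QR), sparse -> a sparse direct factorization
M = sparse([4.0 1 0; 1 4 1; 0 1 4]) # small symmetric tridiagonal system
rhs = [1.0, 2.0, 3.0]
sol = M \ rhs
norm(M * sol - rhs) # ā‰ˆ 0 (machine precision)
```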