If it is intended as a tutorial-type post, I might ask that we spend our effort improving the existing JuMP documentation instead of writing posts like this on Discourse. For example, there is:
This is a forum that is explicitly about the Julia programming language. Reflecting about algorithms and theory is of course useful, but perhaps a personal blog would be a more appropriate venue?
It's fine for this time, but people who follow the Offtopic category do not expect a sudden surge in posts about mathematical programming algorithms or solver shortcomings.
As a forum moderator, I would kindly invite you to think about how you use this specific Discourse platform, and whether it is the best way for you to contribute to the Julia community as a whole (which is of course very happy to have you!).
For instance, documentation PRs may be welcome in the JuMP ecosystem, as pointed out by @odow, but they would need to be focused on what users need. To a certain extent, this forum has the same purpose, helping and orienting people. Unfortunately for me and you, what users need does not necessarily coincide with what we find interesting at a given moment. On the other hand, a blog or social network is a perfect format to tell the world what we find interesting!
Regarding the technical blogging stack, it really depends on your target audience, and I'm not an expert either. You can set up a simple static website with GitHub Pages + Jekyll, or you can publish on websites like https://medium.com/ which has a lot of math-adjacent content.
I hope this does not deter you from contributing. It's great that you have lots of energy and advanced technical knowledge to share. I'm just trying to steer it into something that benefits all of us.
Maybe I don't have considerable expertise, at least not as much as you do.
Actually I don't have any experience, e.g. with how to open a GitHub pull request.
I think I'm currently learning, especially optimization-related things.
And I can post some noteworthy points. The hope is that when someone else has similar thoughts in the future, they will have a reference and can continue the line of thinking.
Therefore, I think Discourse can also serve as a place for discussion, not only for asking and answering questions, because different people may give different solutions. Even the same person may think one way in 2024 but totally change their mind in 2025, due to some new findings.
That's completely okay! Getting involved in the Julia community is what got me to learn all this stuff.
There's a good guide to making a first pull request at Making a first Julia pull request | Katharine Hyatt. It's slightly old and designed for PRs to the language itself, but PRs to the docs are much simpler. You just click the pen icon in the top-right corner of any page and follow the instructions from there. As long as you know how to write Markdown, you're golden.
I was trying to make that point above: when people look for a reference or for points that some individual finds noteworthy, there are other places that seem more natural.
The trouble with your posts is that they're not questions, so they're not meant to teach you stuff. And they're very very specialized, so they may not teach a lot of other people stuff they care about. That is why I'm suggesting other venues for your learner's musings.
Please open a new topic if you wish to discuss something different; this is not related to linear programming at all.
The theory behind mixed-mode sparse AD has been known for three decades or so, but I'm not aware of implementations in high-level programming languages (perhaps with the exception of Matlab). I can't say whether it's widely used; in Julia it's probably not used at all yet. We'll release a preprint next week describing our implementation, which has some people (e.g. CasADi developers) interested, so we'll see.
By "mixed-mode" here I meant sparse Jacobian matrices using a combination of forward and reverse mode, which seems new in Julia. Of course Hessian matrices use forward-over-reverse, whether dense or sparse, and that's implemented in several packages, but I thought the question was not about that (maybe I understood it wrong).
Somewhat related…
The "scales-well" solution method is known as the interior point method (this topic).
And typically Newton's method is involved, and hence gradients/Jacobians/Hessians.
If I remember correctly, interior point methods for LP typically involve very simple barrier functions, whose derivatives are known analytically. Autodiff is most useful when derivatives are not known analytically, or would be tedious to obtain.
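For instance, the logarithmic barrier for the constraint $x > 0$ (a textbook example, not tied to any specific solver) has closed-form derivatives, so autodiff is unnecessary:

$$
\phi(x) = -\sum_{i=1}^n \log x_i, \qquad \frac{\partial \phi}{\partial x_i} = -\frac{1}{x_i}, \qquad \frac{\partial^2 \phi}{\partial x_i^2} = \frac{1}{x_i^2},
$$

with all mixed second derivatives equal to zero, so the Hessian is diagonal.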
People know that established optimization problems need a solver (e.g. Ipopt).
But it may be somewhat astonishing that solving Ax = b itself sometimes needs a solver too, and I'm currently having trouble selecting one.
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
Ipopt is released as open source code under the Eclipse Public License (EPL).
For more information visit https://github.com/coin-or/Ipopt
******************************************************************************
This is Ipopt version 3.14.17, running with linear solver MUMPS 5.7.3.
I thought MUMPS was an LP solver, but that was a misconception.
It is an Ax = b solver.
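To make the distinction concrete, here is a minimal Julia sketch (the matrix and right-hand side are made up for illustration). At each iteration, an interior point method like Ipopt's hands a sparse symmetric linear system to a solver such as MUMPS; in plain Julia, the same kind of system can be solved with a direct factorization via `\`:

```julia
using LinearAlgebra, SparseArrays

# An illustrative sparse symmetric positive definite system,
# standing in for the kind of Ax = b system solved at each
# interior point iteration (values here are arbitrary).
A = sparse([4.0 1.0 0.0;
            1.0 3.0 1.0;
            0.0 1.0 2.0])
b = [1.0, 2.0, 3.0]

# `\` dispatches to a direct sparse factorization, playing the
# role that MUMPS plays inside Ipopt.
x = A \ b

# The residual should be at machine-precision level.
println(norm(A * x - b))
```

MUMPS itself is such a direct factorization package for sparse systems, which is why it appears in Ipopt's banner even though it knows nothing about LP.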