# Library for finding fixed points of a function?

#1

A large number of algorithms in my field involve finding the fixed point of an operator. Are there any good libraries out there that do this (i.e. properly generic, tested on corner cases, etc.)? Of course, a simple version of this can be written within the algorithm itself, but I would rather train students to think things through mathematically, and having a library means fancier methods could be used in the future for the fixed-point iteration. (Mathematica's version is http://reference.wolfram.com/language/ref/FixedPoint.html, but I don't think it gives enough control over the norm.)

Is there anything? A mediocre example of the kind of thing I am interested in is below:

```julia
using LinearAlgebra  # for norm

function fixedpoint(f, x0; residualnorm = (x -> norm(x, Inf)), tol = 1E-10, maxiter = 100)
    residual = Inf
    iter = 1
    xold = x0
    while residual > tol && iter < maxiter
        xnew = f(xold)
        residual = residualnorm(xold - xnew)
        xold = xnew
        iter += 1
    end
    return xold
end

# Calling it
f(x) = 0.5 * x + 1.0
fixedpoint(f, 1.0)
```

In terms of other things I might expect in the library: Anderson acceleration and Newton's method for fixed-point iteration would be great (where the latter might use AD).
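A minimal sketch of the Newton variant: apply Newton's method to `g(x) = f(x) - x`, whose root is the fixed point of `f`. Here `newton_fixedpoint` is a hypothetical name, the iterate is assumed scalar, and a central finite difference stands in for the AD derivative:

```julia
# Newton's method on g(x) = f(x) - x; g'(x) = f'(x) - 1 is approximated
# by a central finite difference (AD would replace this in practice).
function newton_fixedpoint(f, x0; tol = 1e-10, maxiter = 100, h = 1e-6)
    x = x0
    for _ in 1:maxiter
        g = f(x) - x
        abs(g) < tol && return x
        gp = (f(x + h) - f(x - h)) / (2h) - 1   # finite-difference f'(x) - 1
        x -= g / gp                              # Newton step
    end
    return x
end

f(x) = 0.5 * x + 1.0          # fixed point at x = 2
newton_fixedpoint(f, 1.0)     # converges to 2.0 in one step, since f is affine
```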

#2

NLsolve.jl. It has an Anderson acceleration method. But also, `f(x) = x` means that `g(x) = f(x) - x = 0` is a rootfinding problem, so that's how you'd use Newton's method on that.
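A sketch of that transformation with NLsolve (hedged: keyword and field names are taken from the NLsolve README; `g!` is just an illustrative name for the in-place residual):

```julia
using NLsolve

f(x) = 0.5 .* x .+ 1.0        # fixed point at x = 2

# Root problem g(x) = f(x) - x = 0, in NLsolve's in-place form
g!(G, x) = (G .= f(x) .- x)

sol = nlsolve(g!, [1.0])      # default Newton-type trust-region solver
sol.zero                      # should be ≈ [2.0]
```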

#3

But is there nothing standard you know of that solves a standard iterative fixed-point problem (assuming a contraction mapping, etc.)?

I believe Haskell also has this sort of thing (https://en.wikibooks.org/wiki/Haskell/Fix_and_recursion, but I could be wrong). It seems like the sort of thing that could really belong in the standard library for Julia, if a suitable library doesn't already exist.

#4

No, I mentioned NLsolve.jl has Anderson acceleration for this.

But if you want Newton, just do `x -> f(x) - x`. Maybe ask for a convenience function if necessary, but I think showing students how to use a rootfinder for fixed-point acceleration is a simple (and common) enough trick that they should know or learn it. That trick is probably a good part (a) of a HW problem, IMO.

#5

Note the two very different notions of fixed point: there's the fixed point of a numerical function (with floats and tolerances, which seems to be what you're interested in), and there's the fixed point as a programming-language/theory construct, see e.g. https://en.wikipedia.org/wiki/Fixed-point_combinator. One is about numbers, the other about expressions (as far as I understand).

If you really want the pure iteration `x_{n+1} = f(x_n)` and you don't want to do it by hand for some reason, you can use NLsolve's Anderson acceleration and just set the history parameter to 0.
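If I read the NLsolve README right, that suggestion looks something like the sketch below (hedged: `m` is the Anderson history length, the function is passed in residual form `f(x) - x`, and with `m = 0` the method should reduce to plain damped fixed-point iteration):

```julia
using NLsolve

f(x) = 0.5 .* x .+ 1.0        # fixed point at x = 2
g!(G, x) = (G .= f(x) .- x)   # residual form expected by nlsolve

# m = 0: no Anderson history, i.e. undamped Picard iteration x ← f(x)
sol = nlsolve(g!, [1.0]; method = :anderson, m = 0)
```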

#6

Good point on fixed points of expressions. In Mathematica, expressions and computations are hard to separate, so you can use the same construct for both.