[ANN] FixedPoint.jl - Solver of equations using Fixed Point Method [with a few bonuses]

Hello, I’ve written a simple package to solve equations using an accelerated fixed-point method:

  • Allows for Nesterov acceleration (more stable than Anderson acceleration)
  • Allows for relaxation (damping of updates) using a learning rate; see the sketch below
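
In update-rule form, the scheme is roughly the following (a minimal sketch of the standard Nesterov look-ahead plus a damped step, not the package’s actual source; the names ep and vel mirror the keywords used later in this thread):

# Sketch: Nesterov-accelerated, relaxed fixed-point iteration for x = f(x).
#   y_k     = x_k + vel * (x_k - x_{k-1})   (momentum / look-ahead)
#   x_{k+1} = y_k + ep  * (f(y_k) - y_k)    (damped step toward f(y_k))
function nesterov_fp(f, x0; ep = 0.5, vel = 0.9, tol = 1e-10, maxiter = 10_000)
    x_prev, x = x0, x0
    for i in 1:maxiter
        y = x .+ vel .* (x .- x_prev)   # look-ahead point
        r = f(y) .- y                   # fixed-point residual at y
        x_prev, x = x, y .+ ep .* r     # relaxed (damped) update
        maximum(abs, r) < tol && return (x = x, error = maximum(abs, r), iters = i)
    end
    return (x = x, error = maximum(abs, f(x) .- x), iters = maxiter)
end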

I’ve written this piece of code countless times (fixed-point problems come up a lot in my research, and the various Anderson-accelerated packages do not seem to work for my problems), so after rewriting it more than ten times I figured it could be a package :slight_smile:

I sincerely hope it is of use,
Thank you

15 Likes

Note that there are also the FixedPointAcceleration.jl and NLsolve.jl packages. How would you contrast your approach?

My understanding is that Nesterov acceleration is equivalent to a special case of Anderson acceleration (implemented in the above packages) with a window size of 1?

9 Likes

My package is very light: currently (besides Julia itself) it has zero dependencies.

Furthermore, it can be operated through only two parameters: a learning rate and an acceleration factor.
So, as a baseline, I would say it provides value by reducing complexity, both in the number of dependencies and in the number of control parameters.
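
To give a concrete idea of the interface (a toy example; the exact afps defaults appear later in this thread):

using FixedPoint
# solve x = cos(x) (the Dottie number, ≈ 0.739) with the two knobs:
# learning rate ep and acceleration vel
sol = afps(x -> [cos(x[1])], [1.0], ep = 0.5, vel = 0.9)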

Compared to NLsolve: I was unable to use its fixed-point solver successfully; it often diverged on the problems that interest me. The culprit is that it does not offer relaxation.
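
By relaxation I mean damping the update, as in this generic sketch (illustrative only, not NLsolve’s code; β ∈ (0, 1] is the relaxation factor):

# Relaxed (damped) Picard iteration: β = 1 recovers plain x ← f(x),
# while β < 1 shrinks each step and can stabilise otherwise divergent maps.
function relaxed_picard(f, x; β = 0.3, iters = 1000)
    for _ in 1:iters
        x = (1 - β) .* x .+ β .* f(x)
    end
    return x
end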

Compared to FixedPointAcceleration: I’ve never used it in my research, but it looks very feature-packed and complicated. It also seems to fail on my test problems.

By the way, I was not aware of the connection between Nesterov and Anderson acceleration, very interesting! I prefer Nesterov acceleration since it is tunable, while Anderson mixing is not.

If any of this info is incorrect I apologize!
Francesco

3 Likes

Unless I’m missing something, I don’t think this is correct: the momentum parameter in Nesterov depends on the iteration number but not on the actual values of the iterates, and it’s the reverse with Anderson.
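
Concretely, in the standard textbook forms Nesterov uses a precomputed schedule,

$$y_k = x_k + \beta_k (x_k - x_{k-1}), \qquad \beta_k = \frac{k-1}{k+2},$$

whereas Anderson recomputes its mixing coefficients at every step by solving a small least-squares problem in the current residuals.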

@francesco.alemanno what do you mean by no relaxation and no tunability? Doesn’t Anderson ask you for a beta parameter that does this?

1 Like

I mean this: acceleration is provided by linearly combining previous iterates in some way, while relaxation is provided by reducing the magnitude of the update.
In most Anderson mixing implementations only the first of these options is provided, by linearly combining iterates according to the minimiser $\alpha$ of $\lVert \alpha^T (X - f(X)) \rVert_2^2$, where the rows of $X$ are the previous iterates and $f(X)$ applies the map to each of them.
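
To make that least-squares step concrete, here is an illustrative windowed Anderson sketch (Walker-Ni form; toy code, not any package’s implementation):

using LinearAlgebra

# Windowed Anderson mixing: combine the last m iterates with coefficients
# chosen by least squares on the residuals r_i = f(x_i) - x_i.
function anderson_sketch(f, x0; m = 3, tol = 1e-10, maxiter = 100)
    xs = [float.(x0)]
    gs = [f(xs[end])]             # g_i = f(x_i)
    rs = [gs[end] .- xs[end]]     # residuals
    for _ in 1:maxiter
        maximum(abs, rs[end]) < tol && break
        mk = min(m, length(xs) - 1)
        if mk == 0
            push!(xs, gs[end])    # plain fixed-point step to start
        else
            ΔR = hcat((rs[end-mk+i] .- rs[end-mk+i-1] for i in 1:mk)...)
            ΔG = hcat((gs[end-mk+i] .- gs[end-mk+i-1] for i in 1:mk)...)
            γ = ΔR \ rs[end]      # least-squares mixing coefficients
            push!(xs, gs[end] .- ΔG * γ)
        end
        push!(gs, f(xs[end]))
        push!(rs, gs[end] .- xs[end])
    end
    return xs[end]
end

Note there is no step-size knob in this basic form, which is what I mean by not tunable.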

Indeed, the selection of alpha is unconstrained (although there are ways to constrain it more or less strictly, e.g. with regularization), but the generation of the iterates does include a form of relaxation. Interesting that this is not enough for your problem.
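
For instance, one simple way to constrain the coefficients (an illustrative one-liner, reusing the ΔR notation from the sketch above) is to add a Tikhonov penalty λ‖γ‖² to the least-squares solve:

using LinearAlgebra
# Regularised Anderson coefficient solve: damps γ toward zero as λ grows.
regularized_coeffs(ΔR, r; λ = 1e-3) = (ΔR' * ΔR + λ * I) \ (ΔR' * r)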

1 Like

Good to see more fixed-point packages. It increases the odds that people can find one that works for their problem.

One thing about FixedPointAcceleration.jl: you can do “relaxation” by changing the Dampening parameter, like below:

using FixedPointAcceleration
func(x) = [0.5*sqrt(abs(x[1] + x[2])), 1.5*x[1] + 0.5*x[2]]
Inputs = [0.3,900.0]
fp_anderson = fixed_point(func, Inputs; Algorithm = :Anderson)
println("This takes ", fp_anderson.Iterations_, " iterations to converge")
fp_anderson2 = fixed_point(func, Inputs; Algorithm = :Anderson, Dampening = 0.4)
println("This takes ", fp_anderson2.Iterations_, " iterations to converge")

Also, I would probably not register it as FixedPoint.jl. I am pretty sure there used to be a package by that name (as I tried to take it), but they must have renamed it (maybe it is now FixedPointNumbers.jl). So it might be confusing if you take the name.

1 Like

Anderson acceleration is also part of

Ah okay, my package is quite a bit more lightweight, and geared towards ill-conditioned problems. As an example:


using FixedPointAcceleration, FixedPoint

func(x) = [1+tanh((1-x[1])*1000)]
Inputs = [10.0]
a = FixedPointAcceleration.fixed_point(func, Inputs; Algorithm = :Anderson, Dampening = 0.4)
# a = did not converge, I also tried :Aitken, :Newton and :Simple algorithm, nothing seems to work out...

# while my package works ootb
b = FixedPoint.afps(func, Inputs, ep = 0.0005, vel = 0.96)
# b = (x = [1.0000000000000004], error = 4.445332990599127e-13, iters = 347)

Performance tests:


using BenchmarkTools

@time using FixedPointAcceleration
#  0.530836 seconds (1.87 M allocations: 117.636 MiB, 1.58% gc time, 9.45% compilation time)

func(x) = [0.5*sqrt(abs(x[1] + x[2])), 1.5*x[1] + 0.5*x[2]]
Inputs = [0.3,900.0]
@benchmark fixed_point($func, $Inputs; Algorithm = :Anderson)
#=
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
 Range (min … max):  53.458 μs …   4.539 ms  ┊ GC (min … max): 0.00% … 97.13%
 Time  (median):     55.750 μs               ┊ GC (median):    0.00%
 Time  (mean ± σ):   63.202 μs ± 159.641 μs  ┊ GC (mean ± σ):  9.93% ±  3.88%
 =#
@benchmark fixed_point($func, $Inputs; Algorithm = :Anderson, Dampening = 0.4)
#=
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
 Range (min … max):  317.292 μs …   5.559 ms  ┊ GC (min … max):  0.00% … 91.14%
 Time  (median):     326.709 μs               ┊ GC (median):     0.00%
 Time  (mean ± σ):   371.684 μs ± 389.399 μs  ┊ GC (mean ± σ):  10.79% ±  9.38%
=#

@time using FixedPoint
#  0.000480 seconds (960 allocations: 92.016 KiB)

@benchmark afps($func, $Inputs, ep = 1.0, vel = 0.5)
#=
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
 Range (min … max):  10.833 μs …  2.616 ms  ┊ GC (min … max):  0.00% … 98.62%
 Time  (median):     11.417 μs              ┊ GC (median):     0.00%
 Time  (mean ± σ):   13.309 μs ± 62.082 μs  ┊ GC (mean ± σ):  11.31% ±  2.41%
=#

My package is also more type-generic; for example, it gets even faster by leveraging SVectors from StaticArrays:


using BenchmarkTools, StaticArrays

@time using FixedPointAcceleration
#  0.530836 seconds (1.87 M allocations: 117.636 MiB, 1.58% gc time, 9.45% compilation time)

func(x) = @SVector [0.5*sqrt(abs(x[1] + x[2])), 1.5*x[1] + 0.5*x[2]]
Inputs = @SVector [0.3,900.0]
fixed_point(func, Inputs; Algorithm = :Anderson)
#=
ERROR: MethodError: no method matching fixed_point(::typeof(func), ::SVector{2, Float64}; Algorithm=:Anderson)
Closest candidates are:
=#

@benchmark afps($func, $Inputs, ep = 1.0, vel = 0.5)
#=
BenchmarkTools.Trial: 10000 samples with 81 evaluations.
 Range (min … max):  823.556 ns …  1.468 μs  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     826.136 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   839.262 ns ± 31.526 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%
=#

My package also allows tensor (or matrix, or scalar) inputs and functions.
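
For instance (illustrative calls, assuming the same afps keywords as above):

using FixedPoint

# scalar input: fixed point of cos, the Dottie number ≈ 0.739
afps(cos, 1.0, ep = 0.5, vel = 0.9)

# matrix input: elementwise map on a 3x3 matrix
afps(X -> cos.(X), ones(3, 3), ep = 0.5, vel = 0.9)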

3 Likes