Accelerate computation of derivatives of vector field?

Hi everyone!

I am trying to compute Lie brackets of two vector fields, let's say F0 and F1.
They're in dimension 6 and I need brackets up to length 5 (thus, derivatives up to order 5). Unfortunately this is extremely slow to compute even at a single point, and I need to evaluate it along the solution of F0, which would take my whole life at this rate.

Here is my code, if someone can tell me if I am doing something wrong that would be awesome:

using ForwardDiff, DifferentialEquations, LinearAlgebra, Plots, FiniteDifferences

Di = 1.
De = Di
ni = -5. 
ne = ni                                                                        # parameters
ti = 10.
te = ti
tsi = 1.
tse = 1.
Jei = 15.
Jie = Jei
Ii = 0.
Ie = 10.


args = [ Di, De, ni, ne, ti, te, Ii, Ie, tsi, tse, Jei, Jie ]


g0 = [ 0.04629790429840539
  1.611468284549182
  0.12483580053806043
  0.008700946777282085
 -1.5185752607786998
  0.5137968475438063 ]      #initialization

F0(x) = [ De/(pi*te^2) + (2/te)*x[1]*x[2];                          #Vector Field
               1/te*(x[2]^2 + ne - (te*pi*x[1])^2 - te*x[3] + Ie);
               1/tse*(-x[3] + Jei*x[4]);
               Di/(pi*ti^2) + (2/ti)*x[4]*x[5];                          
               1/ti*(x[5]^2 + ni - (ti*pi*x[4])^2 + ti*x[6] + Ii);
               1/tsi*(-x[6] + Jie*x[1]) ]


F1(x) = [ 0; 1; 0; 0; 1; 0 ]                               #Vector field supporting the control

DF0(x)=FiniteDifferences.jacobian(central_fdm(5,1), F0, x)[1]
F01(x) = DF0(x)*F1(x)                                                #1st bracket

DF01(x)=FiniteDifferences.jacobian(central_fdm(5,1), F01, x)[1]
F001(x) = DF0(x)*F01(x) - DF01(x)*F0(x)                  #2nd bracket

DF001(x)=FiniteDifferences.jacobian(central_fdm(5,1), F001, x)[1]
F0001(x) = DF0(x)*F001(x) - DF001(x)*F0(x)             #3rd bracket
 
DF0001(x)=FiniteDifferences.jacobian(central_fdm(5,1), F0001, x)[1]
F00001(x) = DF0(x)*F0001(x) - DF0001(x)*F0(x)             #4th bracket

DF00001(x)=FiniteDifferences.jacobian(central_fdm(5,1), F00001, x)[1]
F000001(x) = DF0(x)*F00001(x) - DF00001(x)*F0(x)       #5th bracket





detF(x)=det([ F1(x) F01(x) F001(x) F0001(x) F00001(x) F000001(x) ])     #determinant

My function detF, which computes the determinant, takes ages:

x = rand(6)
@time detF(x)

I get: 6628.461385 seconds (140.90 G allocations: 3.975 TiB, 9.67% gc time, 0.10% compilation time)

6.126137843264654e8

From what I understand, using anonymous functions to save storage could speed this up? Any suggestions?

Thank you!

PS: if anyone's curious, the dynamics is a mean-field model for populations of neurons; I'm trying to control them around their periodic orbit.

I would recommend using StaticArrays and then defining all your vectors as SVectors, e.g. making F0 return an @SVector and using x = @SVector rand(6) (see the sketch below). StaticArrays are fantastic for small arrays (<100 elements, as a rough ballpark) because they unroll operations and avoid allocations and garbage collection (it's for these same reasons that they are bad for large arrays).
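Roughly like this (an untested sketch reusing your constants; SA[...] is StaticArrays' literal constructor for an SVector):

using StaticArrays

# Sketch: F0 returning a stack-allocated SVector (no heap allocation)
F0(x) = SA[ De/(pi*te^2) + (2/te)*x[1]*x[2],
            1/te*(x[2]^2 + ne - (te*pi*x[1])^2 - te*x[3] + Ie),
            1/tse*(-x[3] + Jei*x[4]),
            Di/(pi*ti^2) + (2/ti)*x[4]*x[5],
            1/ti*(x[5]^2 + ni - (ti*pi*x[4])^2 + ti*x[6] + Ii),
            1/tsi*(-x[6] + Jie*x[1]) ]

x = @SVector rand(6)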

Assuming ForwardDiff can handle your problem, I would expect it to be more accurate and much faster than FiniteDifferences for computing jacobians.
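For example, a drop-in replacement for the first Jacobian would be:

using ForwardDiff

DF0(x) = ForwardDiff.jacobian(F0, x)   # returns the matrix directly, no [1] indexing needed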

I believe, if you look into the documentation further (at least for ForwardDiff; I haven't personally used FiniteDifferences), it is possible to use the DiffResults package to set up derivative calls that compute multiple derivative orders at once. That could provide substantial savings over what you've implemented, where derivatives are defined and computed recursively.
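I haven't checked this against the current docs, but from memory the first-order pattern is roughly:

using ForwardDiff, DiffResults

res = DiffResults.JacobianResult(x)   # preallocated buffer for an R^6 -> R^6 map
ForwardDiff.jacobian!(res, F0, x)     # one AD pass fills value and Jacobian
DiffResults.value(res)                # F0(x), recovered without an extra call
DiffResults.jacobian(res)             # DF0(x)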

The tensors for 5th-order jacobians would likely be too large for StaticArrays, so I would avoid trying to use them for now. You might be able to use SVector for some intermediate values (like the input to and output from the actual function you're differentiating), but I would start by trying to make the derivatives themselves more efficient.

Yes, thank you! I tried with StaticArrays and it gave pretty good results at orders 3 and 4, but at the 5th order it was even slower than before (the computation still hadn't finished after 4 hours). I'll take a look at DiffResults! The problem is that I don't take high-order derivatives of the same function, but of its product with some other function. Anyway, there might be a way in the docs; I'll investigate.

Also, so far there was not much difference between autodiff and finite differences (ForwardDiff was slightly faster, but not significantly).

Edit: Actually, it is slightly faster with FiniteDifferences


Ah, now I see the structure better. Your F0...1 functions slightly resemble applications of the chain rule. Is there some algebraic simplification that can be made? I'm assuming not, but I figured I'd remark on it. Perhaps there isn't anything clever to be done regarding the differentiation itself.

It looks as if you might be calling the same functions with the same arguments many times? If that's the case, there are several packages for memoization (I can't speak to the comparative advantages of each, so I won't recommend one specifically). You could also restructure your code to simply save these results rather than recompute them (see the sketch below), but that might be less flexible and extensible.
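To illustrate the second option, here is a sketch that computes each bracket column once at a given point and reuses F0(x) and DF0(x) across all of them (note it only deduplicates the top-level calls, not the ones hidden inside the nested jacobian evaluations):

function detF_cached(x)
    f0 = F0(x);  J0 = DF0(x)            # computed once, reused for every bracket
    f1 = F1(x)
    b1 = J0 * f1                        # F01(x)
    b2 = J0 * b1 - DF01(x)    * f0      # F001(x)
    b3 = J0 * b2 - DF001(x)   * f0      # F0001(x)
    b4 = J0 * b3 - DF0001(x)  * f0      # F00001(x)
    b5 = J0 * b4 - DF00001(x) * f0      # F000001(x)
    det([f1 b1 b2 b3 b4 b5])
end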

You might at least be able to cut down some of the allocation/GC costs with a strategic use of StaticArrays in only some positions (maybe just for x and F0?). Maybe force the return value of one of the higher-order functions into a normal Array by wrapping the result in a call to Matrix(...), assuming the issue is that SMatrix tries to propagate to too-high orders. But I'm not certain this will provide much benefit.
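For instance, a hypothetical variant of one of the middle levels:

DF01(x) = Matrix(FiniteDifferences.jacobian(central_fdm(5, 1), F01, x)[1])   # force back to a plain Array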

In any case, I’m just about out of ideas. Perhaps someone else will have something clever to offer.

You should declare your constants const. This will both improve type stability and hopefully allow constant propagation in F0 (so that e.g. De/(pi*te^2) is replaced with a number).

If you try ForwardDiff, with e.g. DF001(x) = ForwardDiff.jacobian(F001, x), it will typically spend much of its time in compilation (i.e. on first use). This can possibly be improved by running at a low optimization level (like -O1).


Thanks for the suggestions, guys! I'm trying some of them with slight improvement, but nothing really significant (const and SVector). I'm not sure what you mean by "low optimization level"; choosing a better chunk size?

Edit: Warning. It looks like Memoize uses an IdDict for its cache, which keys on object identity; this may cause problems if the code internally mutates some arrays.

It seems like your expressions are calculating the same things over and over and over. Memoization can help you there:

using FiniteDifferences
using LinearAlgebra   # for det
using Memoize

const Di = 1.0
const De = Di
const ni = -5.0 
const ne = ni                                                                        # parameters
const ti = 10.0
const te = ti
const tsi = 1.0
const tse = 1.0
const Jei = 15.0
const Jie = Jei
const Ii = 0.0
const Ie = 10.0


@memoize F0(x) = [ De/(pi*te^2) + (2/te)*x[1]*x[2];                          #Vector Field
          1/te*(x[2]^2 + ne - (te*pi*x[1])^2 - te*x[3] + Ie);
          1/tse*(-x[3] + Jei*x[4]);
          Di/(pi*ti^2) + (2/ti)*x[4]*x[5];                          
          1/ti*(x[5]^2 + ni - (ti*pi*x[4])^2 + ti*x[6] + Ii);
          1/tsi*(-x[6] + Jie*x[1]) ]

@memoize mycentral(m, n) = central_fdm(m, n)

@memoize F1(x) = [ 0; 1; 0; 0; 1; 0 ]                               #Vector field supporting the control

@memoize DF0(x)=FiniteDifferences.jacobian(mycentral(5, 1), F0, x)[1]
@memoize F01(x) = DF0(x)*F1(x)                                                #1st bracket

@memoize DF01(x)=FiniteDifferences.jacobian(mycentral(5, 1), F01, x)[1]
@memoize F001(x) = DF0(x)*F01(x) - DF01(x)*F0(x)                  #2nd bracket

@memoize DF001(x)=FiniteDifferences.jacobian(mycentral(5, 1), F001, x)[1]
@memoize F0001(x) = DF0(x)*F001(x) - DF001(x)*F0(x)             #3rd bracket
 
@memoize DF0001(x)=FiniteDifferences.jacobian(mycentral(5, 1), F0001, x)[1]
@memoize F00001(x) = DF0(x)*F0001(x) - DF0001(x)*F0(x)             #4th bracket

@memoize DF00001(x)=FiniteDifferences.jacobian(mycentral(5, 1), F00001, x)[1]
@memoize F000001(x) = DF0(x)*F00001(x) - DF00001(x)*F0(x)       #5th bracket


detF(x)=det([ F1(x) F01(x) F001(x) F0001(x) F00001(x) F000001(x) ])     #determinant

Now, you have to be careful with benchmarking, since the second time you call with a particular value, the answer is immediate:

julia> x = rand(6);

julia> @time detF(x)
  0.000778 seconds (3.86 k allocations: 344.203 KiB)
0.0

julia> using BenchmarkTools

julia> @btime detF(x) setup=(x=rand(6)) evals=1
  298.200 μs (3790 allocations: 329.22 KiB)
0.0

The answer is always 0.0. Does that make sense?

Thanks for the advice, I'll give it a try! If it is indeed identically 0, it would mean I probably need to compute the next bracket, but it is also possible that it is nonzero only at very particular points, which wouldn't be picked up randomly.

About your edit: ForwardDiff does mutate arrays, right?

There are some issues with the memoization and mutation of arrays. You definitely need to check for correctness, because, currently, I’m skeptical.

Hmm, yes, there is something wrong: all the brackets are 0, whereas they are nonzero with my previous code. Let me check.

Edit: Memoize and FiniteDifferences don't seem to work together? Everything works if I do it with ForwardDiff. It is still very slow, though.

Hello sgaure! I'm coming back to you because, after having looked around a bit, I'm still not sure what you meant by "low optimization level". This could be useful for me, so it would be great if you could give me more details 🙂 Thanks!

It’s in the Experimental module:


help?> Base.Experimental.@optlevel
  Experimental.@optlevel n::Int

  Set the optimization level (equivalent to the -O command line argument)
  for code in the current module. Submodules inherit the setting of their
  parent module.

  Supported values are 0, 1, 2, and 3.

  The effective optimization level is the minimum of that specified on the
  command line and in per-module settings. If a --min-optlevel value is set
  on the command line, that is enforced as a lower bound.
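So in this case you would either start Julia with julia -O1, or wrap the slow definitions in a module with the macro at the top, e.g.:

module Brackets

Base.Experimental.@optlevel 1   # compile this module's code as if run with -O1

# ... F0, the DF* jacobians and the bracket definitions go here ...

end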
