Using TaylorSeries.jl for functions with many variables (~1000)

I have an application where I need all possible 3rd- and 4th-order derivatives of a function w.r.t. that function's inputs. TaylorSeries.jl works brilliantly for this in my toy model; however, my actual use case requires the Taylor series to be expanded in ~1000 variables, which seems to be too many for the library to handle. Even just initializing the Taylor series variables with:

t_series_vars = set_variables("u", order = 3, numvars = 3 * n_atoms)

does not execute quickly, let alone trying to calculate the derivatives.
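
For context, the pattern that works in my toy model looks like this (a minimal sketch; f here is just a made-up test function standing in for my real potential):

using TaylorSeries

# Toy model: 2 variables, expansion order 4.
x, y = set_variables("u", order = 4, numvars = 2)

f = exp(x) * sin(y)   # stand-in for the real potential

# Derivatives at the expansion point are Taylor coefficients times
# factorials, e.g. d^3f/dx^2dy = 2! * 1! * (coefficient of the x^2*y monomial):
d3 = getcoeff(f, [2, 1]) * factorial(2) * factorial(1)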

Is there some way within TaylorSeries.jl to handle this? Some kind of lazy loading? Or is there another library that can do what I need? Most other threads I've read suggest TaylorSeries for high-order derivatives. I'm getting the feeling I will have to manually chain calls to Zygote or ForwardDiff inside a big loop to do this.
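
By "chaining" I mean something like this sketch: nested scalar derivatives along a single direction v, which would then sit inside a big loop over direction combinations (f, x, and v are made up):

using ForwardDiff

f(u) = sum(abs2, u)^2        # stand-in for the real potential
x = rand(6); v = rand(6)

# 3rd-order directional derivative d^3/dt^3 f(x + t*v) at t = 0,
# obtained by nesting ForwardDiff's scalar derivative three times:
g(t) = f(x .+ t .* v)
d3 = ForwardDiff.derivative(
         t -> ForwardDiff.derivative(
                  s -> ForwardDiff.derivative(g, s), t), 0.0)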

Any help or ideas would be greatly appreciated!

The 4th derivatives require ~1000^4 terms. Do you have terabytes of memory? You probably need to re-think your needs.

Yeah, I know; that's why I was hoping there was a way to load them on demand. I'm also able to ignore certain combinations of the derivatives. So instead of TaylorSeries loading all combinations of derivatives, I would just like to use it as a tool to calculate higher-order derivatives on demand.

And I do have a lot of memory; this will ultimately run on a computing cluster.

Maybe it's relevant to be aware of the curse of dimensionality (https://en.wikipedia.org/wiki/Curse_of_dimensionality). If you try to use these higher-order derivatives for some higher-order optimisation scheme or integrator, then it could be that those have very different benefit/cost ratios in large dimensions.

If you really need higher-order derivatives, the first attempt would maybe be to figure out the sparsity. (A bit of a shot in the dark: maybe you could use https://en.wikipedia.org/wiki/Higher-order_singular_value_decomposition.) But again, you should probably avoid higher-order derivatives in the first place!

Yeah, it's an atomistic simulation, so I'm kind of doomed to working with high-dimensional data (SVD would remove most of the physical meaning for my specific question). As for avoiding higher-order derivatives: third order is the first correction where something interesting happens. At this point I do not think TaylorSeries.jl will work, but is there any reason I should NOT use Zygote or similar libraries to get 3rd and 4th derivatives?

This is a very common calculation in lattice dynamics; people usually do it with finite differences and well-known symmetries. I was trying to avoid finite differences in favor of an exact approach: finite differences can be finicky to get good results with, and auto-diff would avoid all that.

Here's a link; the section on Lattice dynamics and interatomic force constants is what I'm doing:

I suspect you will want forward-mode autodiff here (preferably using something that implements Taylor-mode autodiff, for extra efficiency for the higher-order derivatives).


I think for the method TaylorSeries.jl uses to work, you have to expand the whole function as a Taylor series. I could be very much misunderstanding how that method works, though.

There are a couple of dead repos like HigherOrderDerivatives.jl and Diffractor.jl. It looks to me like TaylorSeries.jl is the only big library that does higher-order derivatives without quadratic scaling.

Use TaylorDiff.jl instead for higher-order AD.

It will give some performance improvements.

That said, as Steven points out, this may just be beyond what’s reasonable. But at least try the fast one first.
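
For reference, the basic calling pattern as I understand it from the TaylorDiff.jl README (f, x, and v below are made up):

using TaylorDiff

# Scalar input: 4th derivative of a function at a point.
d4 = derivative(x -> sin(x) * exp(x), 0.5, 4)

# Vector input: 4th-order *directional* derivative of f at x along v,
# i.e. d^4/dt^4 f(x + t*v) at t = 0, computed in one Taylor-mode pass.
f(u) = sum(abs2, u)^2
x = rand(6)
v = ones(6)
d4_dir = derivative(f, x, v, 4)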


No, TaylorSeries.jl uses allocating number types, so it will not achieve the speed of TaylorDiff.jl and will not scale all that well. See the benchmarks on this.

Cool, thanks! Will try that out.

Looks like TaylorDiff only supports directional derivatives. Maybe I need to brush up on my vector calculus, but I cannot see how to get the full derivative at a point from this interface. Am I missing something obvious? There's not a ton of documentation for this library.

Say I want \frac{d^2f}{dx\,dy} evaluated at (0,0). In TaylorSeries.jl this is pretty easy to do, but I don't see a straightforward way to do that in TaylorDiff.jl. I see how to get \frac{d^2f}{dx^2} but not mixed derivatives.

The point is that you only want to compute directional derivatives, because the full derivative would use terabytes of memory. (Also, finite differencing only gives you directional derivatives.)


Seems doomed then lol. Thanks everyone.

Is this a limitation of current AD methods, or is it just something that will never happen? It seems to me there are lots of applications where higher-order derivatives are important.

I'm not sure you are. What do you actually need the full derivative for? You can very often use directional derivatives to get the information you need. The limitation is fundamental: the total 4th derivative requires 1000^4 elements to be stored, which in Float64 precision is 8 terabytes.

My application is basically higher-order corrections to Hooke's law. If you expand the potential energy of a mass about its equilibrium position, the second derivative evaluated at equilibrium will be the "k" in Hooke's law. If you keep adding terms to that Taylor series you get higher-order corrections. I do not really see a way to write that in terms of directional derivatives.
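
In one dimension that expansion is

U(x) = U(x_0) + \frac{1}{2!}U''(x_0)(x-x_0)^2 + \frac{1}{3!}U'''(x_0)(x-x_0)^3 + \frac{1}{4!}U''''(x_0)(x-x_0)^4 + \cdots

with k = U''(x_0); the cubic and quartic coefficients are the corrections I'm after.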

The derivatives I need are the coefficients of a monster Taylor series, which I pasted the full form of below. u^\alpha_i is the displacement of atom i in the direction \alpha. I'm after the \Psi coefficients, which are the third-order corrections.


U = U_0 + \frac{1}{2!}\sum_{ij}\sum_{\alpha\beta} \Phi^{\alpha\beta}_{ij} u^\alpha_i u^\beta_j + \frac{1}{3!}\sum_{ijk}\sum_{\alpha\beta\gamma} \Psi^{\alpha\beta\gamma}_{ijk} u^\alpha_i u^\beta_j u^\gamma_k + \cdots

(the first-order terms vanish at equilibrium), where \Psi^{\alpha\beta\gamma}_{ijk} = \frac{\partial^3 U}{\partial u^\alpha_i\,\partial u^\beta_j\,\partial u^\gamma_k}.

I could put the function for U into TaylorSeries.jl, but as we discussed that is very much intractable. It does kind of look like directional derivatives, I guess; I might be able to get \frac{\partial^3 U}{\partial u_i\,\partial u_j\,\partial u_k} in the directions \alpha,\beta,\gamma.

Yeah, you can do this with directional derivatives. Just take the derivatives w.r.t. combinations of the unit vectors and add/subtract the terms (a polarization identity), and you'll get the right result without the memory blowup. For example:
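
Here is a sketch with TaylorDiff.jl, assuming the derivative(f, x, direction, order) method from its README (f is a made-up test function). The mixed partial comes from the polarization identity \frac{d^2f}{dx\,dy} = \frac{1}{2}\left(D^2_{e_1+e_2}f - D^2_{e_1}f - D^2_{e_2}f\right):

using TaylorDiff

f(u) = exp(u[1]) * sin(u[2])   # made-up test function

x0 = [0.0, 0.0]
e1 = [1.0, 0.0]
e2 = [0.0, 1.0]

# Second directional derivative along v: d^2/dt^2 f(x0 + t*v) at t = 0.
D2(v) = derivative(f, x0, v, 2)

# Polarization identity for the mixed partial at (0, 0):
d2f_dxdy = (D2(e1 + e2) - D2(e1) - D2(e2)) / 2   # exact answer is 1

The same inclusion-exclusion works at third order: a mixed \Psi-type derivative comes from seven third-order directional derivatives along e_i, e_j, e_k and their pairwise and triple sums.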


Thanks for all the help everyone!

Why do you need to analyze the higher-order corrections via polynomial expansion? Why not just work with the nonlinear potential function directly? What is the end result you are trying to compute?


I'm trying to compute the heat capacity \frac{d\langle U\rangle}{dT}, but in terms of the vibrational modes of the system. Basically every mode shape should contribute its own component to the heat capacity, and there's no way to get that decomposition from the raw potential (at least not that anyone's done before).

I have used auto-diff to calculate \frac{d\langle U\rangle}{dT} explicitly from the raw potential and it does work. It just doesn't have all the information we're after.

If I could somehow rewrite the equation below in terms of modal coordinates, I could do what you're saying and get the information I'm after. Here U_{ij} is the pair potential between two atoms that you're referring to.
U_{total} = 0.5\sum_i\sum_{j\neq i}U_{ij}
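
For concreteness, this is the kind of function I'd be differentiating (a minimal sketch, with Lennard-Jones standing in for my actual pair potential):

using LinearAlgebra

# Hypothetical pair potential U_ij; Lennard-Jones as a stand-in.
lj(r) = 4 * ((1 / r)^12 - (1 / r)^6)

# positions: 3 x N matrix of atomic coordinates
function U_total(positions)
    N = size(positions, 2)
    U = 0.0
    for i in 1:N, j in 1:N
        i == j && continue
        U += lj(norm(positions[:, i] - positions[:, j]))
    end
    return 0.5 * U   # the 0.5 corrects for double counting, as in the sum above
end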

It seems like this is a good research problem if it hasn’t been done.

My thermodynamics is a bit rusty, but as I understand it what you need to compute the quantum-corrected heat capacity is the frequency of each mode \omega_n(E) as a function of the energy E in that mode, right? At low energies (low temperatures), the frequency is simply given by that of the harmonic-oscillator (quadratic/Hooke) potential, but at higher energies there are anharmonic corrections. Why not compute these corrections directly by doing numerical continuation of \omega_n(E) applied directly to the periodic boundary value ODE for each mode n (e.g. as in this example)? This kind of approach should be efficient since you have the harmonic modes as a starting point for root-finding/continuation, should parallelize well (over n), should scale better than trying to calculate the n^4 quartic Taylor terms, and gives you the corrections to all orders (not just quartic).
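
To make the starting point concrete, here is a minimal sketch of getting the harmonic modes from the Hessian (a toy 1D anharmonic ring with unit masses, not your actual system):

using ForwardDiff, LinearAlgebra

# Toy anharmonic potential on a ring of N sites.
N = 8
U(u) = sum(0.5 * (u[mod1(i + 1, N)] - u[i])^2 +
           0.25 * (u[mod1(i + 1, N)] - u[i])^4 for i in 1:N)

u0 = zeros(N)                     # equilibrium configuration
H = ForwardDiff.hessian(U, u0)    # harmonic force constants
vals, modes = eigen(Symmetric(H)) # eigendecomposition of the Hessian
ω = sqrt.(abs.(vals))             # harmonic frequencies (one zero mode from translation)

Each mode then gives the initial guess for the continuation in E.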
