DifferentialEquations: extracting the coefficients of the dense interpolant

From the solution of a differential equation using Vern6, Vern7, Vern8 or Vern9, how can I extract the dense interpolant polynomial coefficients?

Of course I could evaluate the interpolant on a grid of order+1 points inside a step and then recover the coefficients with Lagrange’s method, but I do not want to do that: evaluating the polynomial in Float64 arithmetic creates too much numerical noise for my application. I can solve this issue by either:

  1. using the BigFloat numeric type (tested in Julia)
  2. evaluating the interpolation polynomial using compensated floating-point arithmetic: https://www-pequan.lip6.fr/~jmc/polycopies/Compensation-horner.pdf (I tested this successfully in C++ with Verner’s RK877 dense interpolant computed in double precision)

I would like to try method 2 in Julia but for that I need access to the interpolant coefficients.

Thank you.

sol.k.
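
For example, something like this shows where that data lives (a toy scalar problem chosen purely for illustration; the exact contents and layout of sol.k depend on the algorithm):

using OrdinaryDiffEq

prob = ODEProblem((u, p, t) -> -u, 1.0, (0.0, 1.0))
sol = solve(prob, Vern7())

sol.k   # per-step stage derivatives that the dense output is built from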

It would be easiest to just modify the code.

If it helps then I would definitely accept a PR to at least make it an option.

Thank you @ChrisRackauckas.

It looks to me like I just need to change Base.@evalpoly, since this macro is only used in the interpolant (I want to leave the propagator steps unchanged; I understand the interpolants are used for event finding, but I do not have events in this case).

$ find .julia/packages/OrdinaryDiffEq/ -type f -exec grep -q "evalpoly" {} \; -print
.julia/packages/OrdinaryDiffEq/W2xe3/src/dense/interpolants.jl

However, it seems that even if I redefine Base.@evalpoly before loading DifferentialEquations, this has no effect on the @evalpoly used in DifferentialEquations.jl.

# Error-free transformation of a product: x + y == a*b exactly (via fma)
function TwoProductFMA(a::Float64, b::Float64)
    x = a * b
    y = fma(a, b, -x)
    return x, y
end

# Error-free transformation of a sum: x + y == a + b exactly (Knuth’s TwoSum)
function TwoSum(a::Float64, b::Float64)
    x = a + b
    z = x - a
    y = (a - (x - z)) + (b - z)
    return x, y
end

# Compensated Horner scheme: evaluates the polynomial with coefficients `coeffs`
# (constant term first, as in Base.evalpoly) at x, accumulating the rounding
# error of every step in `corr` and adding it back at the end.
function chorner(x::Float64, coeffs)
    n = length(coeffs)
    r, corr = coeffs[n], zero(x)
    for i in 1:n-1
        r, a = TwoProductFMA(r, x)
        r, b = TwoSum(r, coeffs[n-i])
        corr = corr * x
        corr += a + b
    end
    return r + corr
end

# Attempt to shadow Base.@evalpoly with a compensated version
import Base.@evalpoly
macro evalpoly(z, p...)
    zesc, pesc = esc(z), esc.(p)
    :(chorner($zesc, ($(pesc...),)))
end

Is there a way to redefine Base.@evalpoly globally? Is there some compile cache that has to be manually cleaned up?

Note: I found in the Math.jl source that there is already a compensated polynomial evaluation called Base.exthorner, but I cannot use it and I do not understand why:

julia> Base.exthorner
ERROR: UndefVarError: exthorner not defined

It’s Base.Math.exthorner. Also, it’s worth noting that it is only partially compensated: it will get full accuracy when you have terms of decreasing magnitude (and possibly alternating signs), but it will lose some precision if your polynomial is divergent. (This is because adding the other half of the compensation has a performance cost, and exthorner is one I wrote for special-function evaluation, where the polynomials being evaluated come from convergent series.)

Redefining the macro in Base does not work, but specializing Base.evalpoly does.


# Error-free product: x + y == a*b exactly
function TwoProductFMA(a::Float64, b::Float64)
    x = a * b
    y = fma(a, b, -x)
    return x, y
end

# Error-free sum: x + y == a + b exactly (Knuth’s TwoSum)
function TwoSum(a::Float64, b::Float64)
    x = a + b
    z = x - a
    y = (a - (x - z)) + (b - z)
    return x, y
end

# Compensated Horner evaluation (coefficients with the constant term first,
# as in Base.evalpoly); non-Float64 coefficients are converted on the fly.
function chorner(x::Float64, coeffs)
    n = length(coeffs)
    r, corr = Float64(coeffs[n]), zero(x)
    for i in 1:n-1
        r, a = TwoProductFMA(r, x)
        r, b = TwoSum(r, Float64(coeffs[n-i]))
        corr = corr * x
        corr += a + b
    end
    return r + corr
end

# Specialize Base.evalpoly so the interpolant evaluation dispatches to chorner.
import Base.evalpoly
evalpoly(x::Float64, p::Tuple) = chorner(x, p)
#evalpoly(x::Float64, p::AbstractVector) = chorner(x, p)
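
For reference, a quick check one can run after evaluating the definitions above. The polynomial is a made-up example chosen to exhibit cancellation, not one of the solver’s interpolants:

# (x - 1)^7 expanded in the monomial basis, constant term first as for Base.evalpoly;
# evaluated near the root x = 1, plain Horner loses most of its digits to cancellation.
p = (-1.0, 7.0, -21.0, 35.0, -35.0, 21.0, -7.0, 1.0)
x = 1.0 + 2.0^-6

naive(x, p) = foldr((c, acc) -> muladd(acc, x, c), p; init = 0.0)  # plain Horner, for comparison

naive(x, p)     # most digits lost to cancellation
chorner(x, p)   # compensated Horner, close to full Float64 accuracy
(x - 1.0)^7     # reference: exactly 2^-42, representable in Float64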

It seems the right thing to do might be to have a separate macro and an option to flip the choice?

If you are worried about numerical accuracy, you usually don’t want to use Lagrange’s formula, which is numerically unstable. Use e.g. barycentric interpolation, and where possible you want to avoid computing coefficients of polynomials in a monomial \{1,x,x^2,\ldots\} basis.
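
For completeness, a minimal sketch of the second (“true”) barycentric form with generic weights for arbitrary distinct nodes; this is purely illustrative and not tied to the ODE interpolant:

# Barycentric Lagrange interpolation: numerically stable and never forms
# monomial-basis coefficients.
function barycentric(x, xs, ys)
    # generic weights w_j = 1 / prod_{k != j} (x_j - x_k)
    w = [1 / prod(xs[j] - xs[k] for k in eachindex(xs) if k != j) for j in eachindex(xs)]
    num = den = zero(float(x))
    for j in eachindex(xs)
        d = x - xs[j]
        d == 0 && return float(ys[j])   # x coincides with a node
        num += w[j] * ys[j] / d
        den += w[j] / d
    end
    return num / den
end

barycentric(0.3, 0:0.25:1, sin.(0:0.25:1))   # ≈ sin(0.3)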

What are you trying to do?

It seems the right thing to do might be to have a separate macro and an option to flip the choice?

That would certainly be nice. For my use case, redefining Base.evalpoly worked.

@stevengj I do not want to use Lagrange interpolation. I was just saying that I could obtain the polynomial coefficients from a Lagrange interpolator if I could evaluate the ODE solution interpolant to arbitrary accuracy (e.g. with BigFloat). But ideally I just want to get the interpolant coefficients. My goal was to assess different formulas for evaluating the interpolation polynomial (compensated arithmetic) to get rid of numerical noise in my model. The noise in my model is amplified by cancellation, because the main term of the model is almost of the form of a finite difference:

\frac{f(u(t+T))-f(u(t-T))}{2T}

where u(t) is the ODE solution.

Ideally we get rid of the numerical noise by using a Taylor series in T, and that works well, but it requires accurate derivatives (automatic or analytic) of f \circ u, which is a strong requirement in some cases.

But just reducing the noise in the evaluation of the ODE solution interpolant polynomial already helps a lot.
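
To make the amplification concrete, here is a toy illustration with an analytic stand-in for f(u(t)), so the only noise is Float64 rounding (the hypothetical g below plays the role of f \circ u):

# Rounding noise of size ~eps(|g|) in each evaluation is amplified by the
# 1/(2T) factor of the centered difference.
g(t) = sin(t)
T = 1e-6
centered = (g(1.0 + T) - g(1.0 - T)) / (2T)
centered - cos(1.0)   # error on the order of eps()/(2T) ≈ 1e-10, far above eps()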

The interpolations have the derivative forms written out as well, with sol(t, Val{1}).
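
A quick sketch of that (toy problem and solver choice here are just placeholders):

using OrdinaryDiffEq

prob = ODEProblem((u, p, t) -> -u, 1.0, (0.0, 2.0))
sol = solve(prob, Vern9())

sol(1.3)           # interpolated value of u at t = 1.3
sol(1.3, Val{1})   # first derivative of the dense interpolant at t = 1.3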
