DifferentialEquations extracting coefficients of dense interpolant

From the solution of a differential equation using Vern6, Vern7, Vern8 or Vern9, how can I extract the dense interpolant polynomial coefficients?

Of course I could evaluate the polynomial on a grid of order+1 points inside a step and then compute the interpolation polynomial using Lagrange's method, but I do not want to do that, because evaluating the polynomial in Float64 arithmetic creates too much numerical noise for my application. I can solve this issue by either:

  1. using BigFloat numeric type (tested in Julia)
  2. evaluating the interpolation polynomial using compensated floating-point arithmetic: https://www-pequan.lip6.fr/~jmc/polycopies/Compensation-horner.pdf (I tested this successfully in C++ code with Verner's RK877 dense interpolant computed in double precision)

I would like to try method 2 in Julia but for that I need access to the interpolant coefficients.
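A small self-contained illustration of the noise problem and of method 1, using a generic ill-conditioned polynomial rather than the actual Vern interpolant:

```julia
# Hypothetical illustration, not the actual dense-output polynomial:
# the expanded form of (x - 1)^7, coefficients in ascending powers.
coeffs = (-1.0, 7.0, -21.0, 35.0, -35.0, 21.0, -7.0, 1.0)
x = 1.0 + 2.0^-20

# Near x = 1 the true value is 2^-140 (about 7.2e-43), so a plain
# Float64 Horner evaluation is dominated by rounding error:
naive = evalpoly(x, coeffs)

# Method 1: evaluate in BigFloat and round once at the end.
exact = Float64(evalpoly(big(x), big.(coeffs)))   # == 2.0^-140
```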

Thank you.

sol.k.
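For context, a hedged sketch of where that data lives. sol.k is undocumented internal storage, and the Verner methods build their extra interpolation stages lazily by default, so this is exploratory rather than a stable API:

```julia
# Sketch, assuming OrdinaryDiffEq is installed; the exact layout of
# sol.k is an internal detail and may differ between versions.
using OrdinaryDiffEq

prob = ODEProblem((u, p, t) -> -u, 1.0, (0.0, 1.0))

# lazy = false forces the extra interpolation stages of the Verner
# methods to be computed eagerly, so sol.k holds all of them.
sol = solve(prob, Vern7(lazy = false))

sol.k        # per-step stage values backing the dense interpolant
sol.k[end]   # stages for the final saved step
```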

It would be easiest to just modify the code.

If it helps then I would definitely accept a PR to at least make it an option.


Thank you @ChrisRackauckas .

It looks to me like I just need to change Base.@evalpoly, since this macro is only used in the interpolants (I want to leave the propagator steps unchanged; I understand the interpolants are used for event finding, but I do not have events in this case).

$ find .julia/packages/OrdinaryDiffEq/ -type f -exec grep -q "evalpoly" {} \; -print
.julia/packages/OrdinaryDiffEq/W2xe3/src/dense/interpolants.jl

However, it seems that even if I redefine Base.@evalpoly before loading DifferentialEquations, this has no effect on the @evalpoly used inside DifferentialEquations.jl.

# Error-free transformation: x + y == a*b exactly (requires FMA)
function TwoProductFMA(a::Float64, b::Float64)
    x = a * b
    y = fma(a, b, -x)
    return x, y
end

# Knuth's TwoSum: x + y == a + b exactly
function TwoSum(a::Float64, b::Float64)
    x = a + b
    z = x - a
    y = (a - (x - z)) + (b - z)
    return x, y
end

# Compensated Horner scheme: coefficients in ascending order, carrying
# the rounding error of each product and sum in a correction term
function chorner(x::Float64, coeffs)
    n = length(coeffs)
    r, corr = coeffs[n], zero(x)
    for i in 1:n-1
        r, a = TwoProductFMA(r, x)       # a = rounding error of r * x
        r, b = TwoSum(r, coeffs[n - i])  # b = rounding error of the sum
        corr = corr * x
        corr += a + b
    end
    return r + corr
end
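A quick sanity check of the routine above, using the expanded form of (x - 1)^3 near x = 1: an ill-conditioned case where plain Horner loses essentially every significant digit, while the compensated version stays accurate to roughly 11 digits here (with the chorner, TwoSum, and TwoProductFMA definitions above in scope):

```julia
# Coefficients in ascending order, as chorner expects:
# x^3 - 3x^2 + 3x - 1 == (x - 1)^3
coeffs = (-1.0, 3.0, -3.0, 1.0)
x = 1.0 + 2.0^-20
exact = 2.0^-60   # (2^-20)^3, exactly representable in Float64

chorner(x, coeffs)   # close to exact, unlike plain evalpoly(x, coeffs)
```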

import Base.@evalpoly
macro evalpoly(z, p...)
    zesc, pesc = esc(z), esc.(p)
    :(chorner($zesc, ($(pesc...),)))
end

Is there a way to redefine Base.@evalpoly globally? Is there some compile cache that has to be manually cleaned up?

Note: I found that in Base's math.jl source there is already a compensated polynomial evaluation called Base.exthorner, but I cannot use it and I do not understand why:

julia> Base.exthorner
ERROR: UndefVarError: exthorner not defined

It’s Base.Math.exthorner. Also, it is worth noting that it is only partially compensated: it gets full accuracy when the terms have decreasing magnitude (and possibly alternating signs), but it will lose some precision if your polynomial is divergent. (Adding the other half of the compensation has a performance cost, and exthorner is one I wrote for special function evaluation, where the polynomials come from convergent series.)
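For completeness, a minimal sketch of calling it. exthorner is internal and undocumented, so the name, signature, and hi/lo return convention are assumptions based on the current base/math.jl and may change between Julia versions:

```julia
# Assumed behavior: returns a (hi, lo) pair whose sum is an
# extended-precision value of the polynomial, with coefficients
# given in ascending order as a tuple.
hi, lo = Base.Math.exthorner(2.0, (1.0, 1.0, 1.0))  # 1 + x + x^2 at x = 2
hi + lo
```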


Redefining the macro in Base does not work, but specializing the Base.evalpoly function does (the macro was already expanded when OrdinaryDiffEq was precompiled, whereas adding a new, more specific method to the evalpoly function triggers recompilation of its callers).


function TwoProductFMA(a::Float64, b::Float64)
    x = a * b
    y = fma(a, b, -x)
    return x, y
end

function TwoSum(a::Float64, b::Float64)
    x = a + b
    z = x - a
    y = (a - (x - z)) + (b - z)
    return x, y
end

function chorner(x::Float64, coeffs)
    n = length(coeffs)
    r, corr = Float64(coeffs[n]), zero(x)
    for i in 1:n-1
        r, a = TwoProductFMA(r, x)
        r, b = TwoSum(r, Float64(coeffs[n - i]))
        corr = corr * x
        corr += a + b
    end
    return r + corr
end

import Base.evalpoly
evalpoly(x::Float64, p::Tuple)=chorner(x,p)
#evalpoly(x::Float64, p::AbstractVector) = chorner(x,p)

It seems the right thing to do might be to have a separate macro and an option to flip the choice?