The following example comes up in the Taylor method for integrating differential equations, but the underlying question keeps coming up over and over again in other contexts.

Suppose I have something like the following, which is over-simplified (in particular, `mul!` uses the wrong formula for `*`, for simplicity), but contains the kernel of the question:
```julia
struct Taylor
    coeffs::Vector{Float64}
end

add!(z::Taylor, x::Taylor, y::Taylor, n) = z.coeffs[n] = x.coeffs[n] + y.coeffs[n]
mul!(z::Taylor, x::Taylor, y::Taylor, n) = z.coeffs[n] = x.coeffs[n] * y.coeffs[n]

x = Taylor(zeros(5))
y = Taylor(zeros(5))
```
Now if a user specifies a polynomial function like

```julia
f(x, y) = x*x + y
```

we need to transform it into code that finally produces a new `Taylor` object; but in order to do so, it must calculate the coefficients of the new object element by element.
Currently, TaylorIntegration.jl has a macro `@taylorize` that does this by direct (and painful) code generation.

My question is whether there is a better method, e.g. using operator overloading, or creating an intermediate representation (e.g. using ModelingToolkit.jl) and doing something clever with that. The generated code should do something like the following:
```julia
z1 = copy(x)
z2 = copy(y)

for n in 1:length(x.coeffs)
    mul!(z1, x, x, n)   # z1 = x * x, coefficient by coefficient
    add!(z2, z1, y, n)  # z2 = z1 + y
end
```
We really do need to do this element by element because of how the Taylor method works. So we need some way of persistently recording which intermediate objects get created along the way, and of modifying them later.
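For concreteness, here is one direction an operator-overloading answer might take. This is only a sketch under my own assumptions, not TaylorIntegration.jl's actual mechanism: a hypothetical `Tracer` wrapper overloads `*` and `+` to allocate each intermediate `Taylor` and record the corresponding mutating kernel on a tape, after which the tape is replayed coefficient by coefficient (it keeps the same deliberately wrong elementwise formula for `*` as above).

```julia
struct Taylor
    coeffs::Vector{Float64}
end

add!(z::Taylor, x::Taylor, y::Taylor, n) = z.coeffs[n] = x.coeffs[n] + y.coeffs[n]
mul!(z::Taylor, x::Taylor, y::Taylor, n) = z.coeffs[n] = x.coeffs[n] * y.coeffs[n]

# Hypothetical tracer type: wraps a Taylor and shares a tape of
# (kernel, output, input1, input2) records.
struct Tracer
    val::Taylor
    tape::Vector{Tuple{Function,Taylor,Taylor,Taylor}}
end

function record(op!, x::Tracer, y::Tracer)
    z = Taylor(zeros(length(x.val.coeffs)))   # new intermediate object
    push!(x.tape, (op!, z, x.val, y.val))     # remember how to fill it in
    return Tracer(z, x.tape)
end

Base.:*(x::Tracer, y::Tracer) = record(mul!, x, y)
Base.:+(x::Tracer, y::Tracer) = record(add!, x, y)

# Trace the user's function once to build the tape...
f(x, y) = x*x + y

tape = Tuple{Function,Taylor,Taylor,Taylor}[]
xt = Tracer(Taylor([1.0, 2.0, 0.0]), tape)
yt = Tracer(Taylor([3.0, 0.0, 1.0]), tape)
result = f(xt, yt)

# ...then replay it element by element: all kernels run for n = 1
# before any kernel runs for n = 2, as the Taylor method requires.
for n in 1:3, (op!, z, a, b) in tape
    op!(z, a, b, n)
end

result.val.coeffs  # [4.0, 4.0, 1.0] with the elementwise formula above
```

The tracing pass allocates the intermediates (`z1`, `z2` in the loop above) exactly once, and the tape persistently records them, which is the property asked for; whether this beats `@taylorize`'s code generation in practice is exactly the open question.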
I have a rather different approach outlined in https://github.com/dpsanders/LazyTaylorSeries.jl, but I think it would be very useful to hear any suggestions that people can make about this kind of coding pattern.
cc @lbenet