LinearAlgebra and TaylorSeries not interacting as expected

I just found out that some bizarre behaviour I was getting when using TaylorIntegration was due to my use of LinearAlgebra.norm in my equation of motion… Apparently, norm treats a Taylor1 (or TaylorN) object as a collection of coefficients rather than as a scalar, so it doesn’t behave as I expected… Here is a minimal working example with the result I thought I would get (with mynorm) and the actual output:

using TaylorSeries, LinearAlgebra
mynorm(vec) = sqrt(sum(transpose(vec) * vec))
v = [1., 0.] .+ Taylor1(1)
println(norm(v))   # 1.7320508075688774, i.e. sqrt(3)
println(mynorm(v)) # 1.0 + 1.0 t + 𝒪(t²)

Does someone happen to know why this is the case?

It’s not the LinearAlgebra package. The TaylorSeries package defines the norm of a Taylor1 object to be the norm of its coefficients.
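A quick sketch of what this means (assuming, consistent with the sqrt(3) output above, that the overload takes the 2-norm of the coefficient vector by default):

```julia
using TaylorSeries, LinearAlgebra

t = Taylor1(1)        # truncation order 1: 1.0 t + 𝒪(t²)
p = 1.0 + t           # coefficients [1.0, 1.0]

# TaylorSeries defines norm(::Taylor1) as the norm of its coefficients,
# so norm(p) should equal norm([1.0, 1.0]) = sqrt(2), not a Taylor1.
norm(p) == norm([1.0, 1.0])

# For a vector of Taylor1s, LinearAlgebra.norm then reduces over these
# scalar norms: norm([1+t, t]) = sqrt(norm(1+t)^2 + norm(t)^2) = sqrt(3),
# which matches the 1.7320508… seen in the original example.
v = [1.0, 0.0] .+ t
norm(v) ≈ sqrt(3)
```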

See the discussion linked here:

Damn it, I didn’t see that TaylorSeries also defined norm… This explains a lot. The way it’s defined still seems strange to me, but at least its origin makes sense now. Thanks a lot!