I have a non-linear model with about 16 equations. I linearize it with different inputs and outputs around different operating points, using both the linearize() function and the linearize_symbolic() function. This already works much better than Matlab/Simulink, but I often get quite different results depending on my approach.

Question: Is it possible to increase the accuracy of the linearization by using a higher precision than Float64? If yes, how?

I didn't run any tests, but it seems that the linearization itself probably introduces the main approximation error, not round-off effects.

Depending on your nonlinear equations, the linearization around x_0 introduces an error in the right-hand side which grows at the scale of \Vert f(x_0) + Df(x_0)(x - x_0) - f(x) \Vert \leq C \Vert x - x_0 \Vert^2. This can cause solution errors at the scale \frac{\varepsilon}{L} ( e^{L \Vert x - x_0 \Vert} - 1), where \varepsilon is the error in the RHS and L is the Lipschitz constant of the system (Hairer, Nørsett, Wanner; Solving ODE I, Thm 10.2). As a result, the round-off error incurred while computing the linearization (maybe at most \varepsilon \approx 10^{-10}) is probably less relevant than the linearization error itself
(\varepsilon \approx \sup_x \Vert f(x_0) + Df(x_0)(x - x_0) - f(x) \Vert \approx C \Vert x - x_0 \Vert^2).
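To illustrate the quadratic error growth with a toy example (a hypothetical scalar model f(x) = sin(x), not your actual system):

```python
import math

# Toy scalar model: f(x) = sin(x), linearized at x0 = 0.5.
x0 = 0.5

def lin_error(h):
    """Error of the linearization f(x0) + f'(x0)*h at distance h from x0."""
    return abs(math.sin(x0) + math.cos(x0) * h - math.sin(x0 + h))

e1 = lin_error(1e-2)
e2 = lin_error(1e-3)
# Shrinking the distance by 10x shrinks the error by ~100x (quadratic),
# and at h = 1e-2 the error (~1e-5) already dwarfs Float64 round-off
# (~1e-16), so extra precision in the arithmetic would not help.
print(e1, e2, e1 / e2)
```

The same scaling experiment works for any smooth right-hand side: if the error does not shrink quadratically as you move the evaluation point toward the operating point, something other than the linearization itself is dominating.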

Maybe you could try to get a quadratic approximation instead of a linear one?
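A minimal sketch of that idea, again with a hypothetical scalar model f(x) = sin(x): keeping the second-order Taylor term drops the approximation error from quadratic to cubic in the distance from the expansion point.

```python
import math

# Hypothetical scalar model f(x) = sin(x), expanded at x0 = 0.5.
x0 = 0.5

def quad_error(h):
    """Error of the second-order (quadratic) Taylor approximation."""
    approx = math.sin(x0) + math.cos(x0) * h - 0.5 * math.sin(x0) * h ** 2
    return abs(approx - math.sin(x0 + h))

ratio = quad_error(1e-2) / quad_error(1e-3)
# Shrinking the distance by 10x shrinks the error by ~1000x (cubic),
# versus ~100x for the plain linearization.
print(ratio)
```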

The linearization uses ForwardDiff.jl, so it's already accurate to about machine epsilon. Unless your model is exceptionally poorly scaled numerically, I'd guess that something else is going on.

What are the approaches and the results you are talking about, linearize vs linearize_symbolic? Do you get any warnings from the linearization function?

A way to check for scaling problems is to look at the gradient at the approximation point. If some element of the gradient is very close to zero, then there could be a scaling problem. Generally speaking, it is nice if the elements of the gradient are of roughly the same order of magnitude.
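A hedged sketch of that check (the model f below is made up, and a finite-difference gradient is enough for a quick diagnostic):

```python
def f(x):
    # Hypothetical poorly scaled model: the second state is effectively
    # in micro-units, so its gradient entry comes out ~1e6 times smaller.
    return x[0] ** 2 + 1e-6 * x[1] ** 2

def fd_gradient(f, x, h=1e-6):
    """Central finite-difference gradient, good enough for a sanity check."""
    g = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

g = fd_gradient(f, [1.0, 1.0])
spread = max(abs(v) for v in g) / min(abs(v) for v in g)
# Entries spanning ~six orders of magnitude hint at a scaling problem;
# rescaling the offending state (or its units) would fix it.
print(g, spread)
```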

Different results at a given point, say x, are entirely expected when the approximations are done at different points, say, x0 and x1.

Automatic differentiation and symbolic differentiation should give fairly good accuracy in "most" contexts. On mathematical models for computing Jacobians, they should be extremely close (people normally say "within floating point error", which isn't exactly true, but I digress). It may not be exactly true because different optimizations in the AD library, particularly SIMD, can change the summation order, but that should be like 2 ulp in most instances.
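The summation-order point can be seen with a plain sum (the values below are made up; the effect is the same one SIMD reordering produces inside an AD library):

```python
# Summing the same values in two different orders: the results may
# differ by a few ulp of the intermediate sums, which is the size of
# discrepancy AD-vs-symbolic Jacobian entries can legitimately show.
vals = [1e8, 1.0, -1e8, 1e-8, 3.14159, -2.71828]

forward = 0.0
for v in vals:
    forward += v

backward = 0.0
for v in reversed(vals):
    backward += v

# Any difference here is round-off from the reordering, not a
# modelling-level error.
print(forward, backward, abs(forward - backward))
```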