Yeah, it just takes some manpower. And it really is fundamental to autodiff, because dual numbers take up (N+1)*bitsize memory. For example, if you have a dual number of Float64s with 3 partials, that dual number is actually a 4-dimensional number of 64 bits each, which is why this is never going to work without caching a bit differently. But the trick for how to use dispatch + reinterpret is documented in the FAQ there: it essentially builds a cache for the two possible sizes, chooses between them, and fixes the tag so ForwardDiff doesn't error.
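As a rough illustration of the memory blow-up (a minimal sketch, assuming ForwardDiff.jl is installed; the `Nothing` tag is just a placeholder):

```julia
using ForwardDiff

# A Float64 dual number with 3 partials stores 4 Float64s:
# the value plus one partial per direction.
D = ForwardDiff.Dual{Nothing,Float64,3}
sizeof(Float64)  # 8 bytes
sizeof(D)        # 32 bytes = (3 + 1) * 8
```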
@ChrisRackauckas now I got what you mean, Chris, thanks! Since the ForwardDiff package only errors when using solvers for stiff problems, I assume the non-stiff solvers are not using ForwardDiff? Are they still auto-differentiating by default, or do they approximate by finite differences?
So yes, until DynamicalSystems.jl makes its caches compatible with ForwardDiff, it cannot use automatic differentiation for the stiff ODE solvers. That is fine since they all have a switch to use numerical differencing (`Rodas5(autodiff=false)`). The non-stiff ODE solvers aren't building Jacobians, so they don't use any differencing.
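For reference, turning off autodiff on a stiff solver looks like this (a minimal sketch using the Lorenz system; the solver choice and parameter values are just placeholders):

```julia
using OrdinaryDiffEq

function lorenz!(du, u, p, t)
    σ, ρ, β = p
    du[1] = σ * (u[2] - u[1])
    du[2] = u[1] * (ρ - u[3]) - u[2]
    du[3] = u[1] * u[2] - β * u[3]
end

prob = ODEProblem(lorenz!, [1.0, 0.0, 0.0], (0.0, 100.0), (10.0, 28.0, 8/3))
# Jacobian via finite differences instead of ForwardDiff:
sol = solve(prob, Rodas5(autodiff=false))
```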
@ChrisRackauckas I played around with the stiff solvers and `autodiff=false` a bit and, compared to the non-stiff solvers in the parameter regimes of my systems where they work, they seem to be very slow. Providing a Jacobian by hand to avoid finite differences is not an option, as the Jacobians are simply too big.
I had a (short) look at ParameterizedFunctions and ModelingToolkit, and I really like the idea of symbolically calculating Jacobians, etc. It would be really nice, and less error-prone for the user, if one could simply define the actual system, say `f`, via e.g. something like `@ode_def`, and not only have the Jacobian `Df` but also the Jacobian of the coupled system `hcat(f, Df)` automatically calculated, which could then be used to solve the coupled system without the need for auto-differentiation or finite differences. Is there a way to achieve this atm?
ModelingToolkit.jl is a symbolic system, so there is a way to get it once you've symbolicized it. Explain what you're looking to generate and maybe I can help you get there. Will you be at the JuliaCon workshop? We plan to go over this.
Also look into SparseDiffTools.jl. Automatic detection of sparsity + Jacobian coloring will get you pretty close to analytical on a large sparse system.
The whole idea of using the `tangent_integrator` is to calculate Lyapunov exponents. Say we have a differential equation
d/dt x = f(x)
with Jacobian Df, then we need to solve the coupled system
d/dt x = f(x)
d/dt W = Df(x)*W
This is what `create_tangent` does: it ties both differential equations together into one differential equation, say g, which can then be solved. The solver might then need to compute Dg, the Jacobian of g, when solving for some initial values.
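Written out by hand for the Lorenz system, the coupled system g looks roughly like this (a sketch, not the actual `create_tangent` implementation; the hard-coded Jacobian is exactly the part one would like a symbolic engine to generate):

```julia
# State layout: u[:, 1] is x, u[:, 2:4] is the 3×3 deviation matrix W.
function g!(du, u, p, t)
    σ, ρ, β = p
    x = u[:, 1]
    W = u[:, 2:4]
    # d/dt x = f(x)
    du[1, 1] = σ * (x[2] - x[1])
    du[2, 1] = x[1] * (ρ - x[3]) - x[2]
    du[3, 1] = x[1] * x[2] - β * x[3]
    # Hand-coded Jacobian Df(x) of the Lorenz system
    Df = [-σ        σ      0.0;
           ρ - x[3] -1.0   -x[1];
           x[2]      x[1]  -β]
    # d/dt W = Df(x) * W
    du[:, 2:4] .= Df * W
end
```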
So if we have a symbolically defined f, we can calculate the Jacobian Df of f, couple the system to get g, and then symbolically calculate Dg. So we only need to define f and the rest could be done by the symbolic engine. This would save a lot of code writing, as the dimension of Dg grows asymptotically like the fourth power of the dimension of f, e.g. for the Lorenz system
dim(f) = 3
dim(Df) = 3x3 = 9
dim(g) = 3 + 9 = 12
dim(Dg) = 12x12 = 144.
SparseDiffTools would be very handy here, as the dimensions grow quickly.
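The scaling above is easy to check: for dim(f) = n we get dim(g) = n + n² and dim(Dg) = (n + n²)², which grows like n⁴:

```julia
n = 3                # dim(f) for the Lorenz system
dim_g  = n + n^2     # 3 + 9 = 12
dim_Dg = dim_g^2     # 12 * 12 = 144
```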
I will not be at JuliaCon this year, unfortunately. I picked up Julia two months ago and am just scratching the surface right now.
JuliaCon is going on, so remind me to do this, but a good Hackathon project would be to implement the Jacobian function. Then you could literally write `ModelingToolkit.jacobian(f)*W` and it would generate that equation (matrix multiplication already works). Right now, I'd just calculate the Jacobian using the primitive `expand_derivative(ex)`, looping through and applying the derivative operator (which is how the Jacobian function would be created).
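As a rough sketch of what that symbolic route looks like: Symbolics.jl (which was later split out of ModelingToolkit) now ships such a Jacobian helper. Assuming a current Symbolics.jl install (this API postdates the thread and is illustrative, not what was available at the time):

```julia
using Symbolics

@variables x y z σ ρ β
f = [σ * (y - x),
     x * (ρ - z) - y,
     x * y - β * z]

# Symbolic 3×3 Jacobian Df — no hand-coding, no finite differences.
Df = Symbolics.jacobian(f, [x, y, z])
```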
I will have a look at this! Thanks Chris for all your help!