Hello,
I have the following ODE system
\dot{\mathbf{u}} = \mathbf{A} \ \mathbf{u}
where \mathbf{u} is a vector and \mathbf{A} is a Diagonal matrix. This ODE system admits an analytical solution of the form
\mathbf{u}(t) = \exp[\mathrm{diag}(\mathbf{A}) \, t] \cdot \mathbf{u}_0
so I wouldn't need DifferentialEquations.jl to solve it. However, I would like to take advantage of its OrdinaryDiffEqCallbacks.jl framework to implement, for example, a ContinuousCallback with interpolation.
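For concreteness, a minimal sketch of the analytical propagation I mean (the values of λ and u0 are just placeholders):

λ  = [-1.0, -1.5, -2.0]      # placeholder diagonal of A
u0 = [1.0, 2.0, 3.0]         # placeholder initial condition
u(t) = exp.(λ .* t) .* u0    # u(t) = exp(diag(A) t) ⋅ u0, applied elementwise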
I was wondering if there exists a method to implement this. I tried the LinearExponential algorithm, but it is overkill for my problem: I don't need to construct the Krylov subspace, since I already have the analytical solution. I was thinking of creating a custom solver like DiagonalExponential and defining only the methods needed from the integrator interface, but I don't know how to proceed exactly.
Since you have the functional form of the solution, is there any reason you wouldn’t use NonlinearSolve.jl directly with your callback? (Or SimpleNonlinearSolve.jl?)
How exactly? I want to trigger a callback when the norm of the vector \mathbf{u}(t) becomes smaller than a random number r; then I basically change the value of \mathbf{u} and keep evolving the state analytically until the next jump.
Something like
using SimpleNonlinearSolve
using LinearAlgebra: norm
u0 = [1.0, 2.0, 3.0] # ICs
λ = [-1.0, -1.5, -2.0] # Eigenvalues / diagonals
t = 0.0 # Start time
r = 0.5 # Required norm
f(t, p) = norm(exp.(p.λ .* t) .* p.u0) - p.r
prob = NonlinearProblem{false}(f, t, (; λ, u0, r))
sol = solve(prob, SimpleNewtonRaphson())
if SimpleNonlinearSolve.SciMLBase.successful_retcode(sol)
    t = sol.u
    println("Converged to time: $t")
    # Do callback
else
    println("Failed to converge")
end
Thank you very much for this example. It seems to go in the direction I want.
Then I only need to repeat this procedure up to a predefined time t_f, right?
Do you think that using a loss function like your f^2(t, p) would be more efficient? In principle an optimiser could take advantage of auto differentiation to find the minimum of the function.
Yes, just iterate until you hit the required time.
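For example, a rough sketch of that outer loop, reusing the placeholder values from above and with a purely illustrative jump (a simple renormalisation):

using SimpleNonlinearSolve
using LinearAlgebra: norm

# Illustrative only: the jump action and all values are placeholders.
function evolve_with_jumps(λ, u0, t_f)
    t = 0.0
    u = copy(u0)
    f(τ, p) = norm(exp.(p.λ .* τ) .* p.u) - p.r
    while t < t_f
        r = rand()  # new random threshold for this segment
        prob = NonlinearProblem{false}(f, 1.0, (; λ, u, r))
        sol = solve(prob, SimpleNewtonRaphson())
        SimpleNonlinearSolve.SciMLBase.successful_retcode(sol) || break
        τ = sol.u                  # time elapsed since the last jump
        t + τ > t_f && break       # don't step past t_f
        t += τ
        u = exp.(λ .* τ) .* u      # state just before the jump
        u = u ./ norm(u)           # placeholder jump: renormalise
    end
    return t, u
end

evolve_with_jumps([-1.0, -1.5, -2.0], [1.0, 2.0, 3.0], 10.0)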
I would be very surprised if formulating it as a loss function would be helpful. Generally speaking, if the problem is a root-finding problem (and this one is), using an optimiser on it will be very inefficient. The solver here will already be using auto diff to calculate the Jacobian. (I'm guessing that AutoForwardDiff is the default.)
The only way to get more performance out of something like this is to use StaticArrays (if the problem is reasonably small) or change the function to be in place rather than allocating.
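For instance, a minimal sketch of the StaticArrays variant (same placeholder values as above):

using SimpleNonlinearSolve, StaticArrays
using LinearAlgebra: norm

u0 = @SVector [1.0, 2.0, 3.0]     # stack-allocated initial condition
λ  = @SVector [-1.0, -1.5, -2.0]  # stack-allocated eigenvalues
r  = 0.5

f(t, p) = norm(exp.(p.λ .* t) .* p.u0) - p.r   # same residual, now allocation-free
prob = NonlinearProblem{false}(f, 0.0, (; λ, u0, r))
sol = solve(prob, SimpleNewtonRaphson())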
You can pass krylov = :off, but you don't have to; it's the default. Adapting the example from the docs to a diagonal operator:
julia> using LinearAlgebra, OrdinaryDiffEq, SciMLOperators
julia> A = MatrixOperator(Diagonal([-0.2, -0.35]))
DiagonalOperator(2 × 2)
julia> prob = ODEProblem(A, [1.0, -1.0], (1.0, 6.0));
julia> sol = solve(prob, LinearExponential())
retcode: Success
Interpolation: 3rd order Hermite
t: 2-element Vector{Float64}:
1.0
6.0
u: 2-element Vector{Vector{Float64}}:
[1.0, -1.0]
[0.36787944117144217, -0.17377394345044517]
Add your ContinuousCallback and you're good to go.
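For example, a rough sketch of such a callback, reusing prob from above (the threshold and the jump action are just placeholders, and the crossing is located via the step interpolant):

using LinearAlgebra: norm

r = Ref(0.5)                                  # placeholder threshold
condition(u, t, integrator) = norm(u) - r[]   # fires when norm(u) crosses r
function affect!(integrator)
    integrator.u ./= norm(integrator.u)       # placeholder jump: renormalise
    r[] = rand()                              # draw a new threshold
end
cb = ContinuousCallback(condition, affect!)

sol = solve(prob, LinearExponential(), callback = cb)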
But of course, @dawbarton’s solution works just as well and is arguably more transparent while having fewer and smaller dependencies.
Oh, that's true. In this way we directly apply exp to a Diagonal matrix. Is this action non-allocating? I mean, does it allocate every time it computes exp(A) * u, or does it compute mul!(du, exp(A), u)?
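(For what it's worth, one quick way to benchmark the two operations in isolation, independent of what LinearExponential does internally:)

using BenchmarkTools, LinearAlgebra

D  = exp(Diagonal([-0.2, -0.35]))   # exp of a Diagonal is again Diagonal
u  = [1.0, -1.0]
du = similar(u)

@btime $D * $u;             # allocates a new result vector
@btime mul!($du, $D, $u);   # in-place, non-allocating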