Everything should always be measured in work-precision. How much work did you put in, what accuracy are you expecting, and what was your query? I find a lot of people who say agents don’t work are also just trying the “AI plz fix” kinds of queries.
I shared some of the queries I use: What Agentic AI "Vibe Coding" In The Hands Of Actual Programmers / Engineers - Stochastic Lifestyle
For example, OrdinaryDiffEq.jl’s FBDF:
OrdinaryDiffEq.jl's FBDF and QNDF currently use the Hermite
interpolation fallback for their dense output / interpolation.
However, these methods have a well-defined interpolation on their k
values that should be used instead. For example, FBDF already has the
Lagrange interpolation defined and used for its nonlinear solver
initial guess:
https://github.com/SciML/OrdinaryDiffEq.jl/blob/4004fc75dff09855bb96333f02d4ce0bb0f8c57c/lib/OrdinaryDiffEqBDF/src/dae_perform_step.jl#L418.
This should be used for its dense output. QNDF has its version
defined here:
https://github.com/SciML/OrdinaryDiffEq.jl/blob/4004fc75dff09855bb96333f02d4ce0bb0f8c57c/lib/OrdinaryDiffEqBDF/src/bdf_perform_step.jl#L935-L939.
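As background on what that existing Lagrange interpolation buys, here is a hedged, language-agnostic sketch in Python (all names invented; this is not OrdinaryDiffEq.jl code): a Lagrange polynomial through k+1 stored (t, u) pairs reproduces any polynomial of degree up to k exactly, which is what makes it a natural dense output for a BDF-family method.

```python
# Hypothetical illustration of the kind of Lagrange interpolation a
# BDF-style method already carries: a degree-k polynomial through the
# last k+1 (t, u) points. Names are made up for this sketch.

def lagrange_interp(ts, us, t):
    """Evaluate the Lagrange polynomial through (ts[i], us[i]) at t."""
    total = 0.0
    for i, (ti, ui) in enumerate(zip(ts, us)):
        weight = 1.0
        for j, tj in enumerate(ts):
            if j != i:
                weight *= (t - tj) / (ti - tj)
        total += weight * ui
    return total

# A degree-2 polynomial is reproduced exactly from 3 points:
f = lambda t: 2.0 * t**2 - 3.0 * t + 1.0
ts = [0.0, 0.5, 1.0]
us = [f(t) for t in ts]
print(lagrange_interp(ts, us, 0.25))  # matches f(0.25) = 0.375
```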
If you look at other stiff ODE solvers that have a specially
defined interpolation, like the Rosenbrock methods, you see an
interpolants file
https://github.com/SciML/OrdinaryDiffEq.jl/blob/4004fc75dff09855bb96333f02d4ce0bb0f8c57c/lib/OrdinaryDiffEqRosenbrock/src/rosenbrock_interpolants.jl
with a summary
https://github.com/SciML/OrdinaryDiffEq.jl/blob/4004fc75dff09855bb96333f02d4ce0bb0f8c57c/lib/OrdinaryDiffEqRosenbrock/src/interp_func.jl
that overrides the interpolation. Just as importantly, the
post-solution interpolation saves integrator.k, which holds
the values used for the interpolation:
https://github.com/SciML/OrdinaryDiffEq.jl/blob/4004fc75dff09855bb96333f02d4ce0bb0f8c57c/lib/OrdinaryDiffEqRosenbrock/src/rosenbrock_perform_step.jl#L1535.
If I understand correctly, this is already k in FBDF, but in
QNDF these are currently the values named D. The tests for
custom interpolations are in
https://github.com/SciML/OrdinaryDiffEq.jl/blob/4004fc75dff09855bb96333f02d4ce0bb0f8c57c/test/regression/ode_dense_tests.jl
Search around for any other Rosenbrock interpolation tests as
well. This should make it so that savevalues! always uses the
interpolation
https://github.com/SciML/OrdinaryDiffEq.jl/blob/4004fc75dff09855bb96333f02d4ce0bb0f8c57c/lib/OrdinaryDiffEqCore/src/integrators/integrator_utils.jl#L122
while if dense=true (i.e. normally, when saveat is not
specified) the interpolation is then done in sol(t) by using
the saved (sol.u[i], sol.t[i], sol.k[i]).
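To make the (sol.u[i], sol.t[i], sol.k[i]) mechanism concrete, here is a hedged toy sketch in Python (all names invented; this is not the actual OrdinaryDiffEq.jl API): the solution object locates the step containing t and hands the saved k for that step to a method-specific dense-output formula. A generic cubic Hermite stands in for the interpolant here.

```python
import bisect

# Toy "solution" object: per step i we saved (ts[i], us[i], ks[i]), where
# ks[i] holds whatever the method's dense output needs. Hypothetical names.

class ToySolution:
    def __init__(self, ts, us, ks, interpolant):
        self.ts, self.us, self.ks = ts, us, ks
        self.interpolant = interpolant  # method-specific dense output

    def __call__(self, t):
        # locate the step interval [ts[i], ts[i+1]] containing t
        i = max(bisect.bisect_right(self.ts, t) - 1, 0)
        i = min(i, len(self.ts) - 2)
        theta = (t - self.ts[i]) / (self.ts[i + 1] - self.ts[i])
        return self.interpolant(theta, self.us[i], self.us[i + 1], self.ks[i])

def hermite(theta, u0, u1, k):
    # the generic fallback: cubic Hermite from endpoint values and slopes;
    # in this toy, k stores (f(t0), f(t1), dt)
    f0, f1, dt = k
    h00 = 2*theta**3 - 3*theta**2 + 1
    h10 = theta**3 - 2*theta**2 + theta
    h01 = -2*theta**3 + 3*theta**2
    h11 = theta**3 - theta**2
    return h00*u0 + h10*dt*f0 + h01*u1 + h11*dt*f1

# u(t) = t^2 on one step [0, 1]: u0=0, u1=1, f(t)=2t so f0=0, f1=2
sol = ToySolution([0.0, 1.0], [0.0, 1.0], [(0.0, 2.0, 1.0)], hermite)
print(sol(0.5))  # reproduces the quadratic: 0.25
```

Swapping `hermite` for a method-specific interpolant built from the method's own saved k values is the shape of the change being requested.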
And one for SciMLSensitivity:
The SciMLSensitivity.jl callback differentiation code has a
design issue. It uses the same vjp calls to `_vecjacobian!`,
but its arguments are not the same. You can see this here
https://github.com/SciML/SciMLSensitivity.jl/blob/master/src/callback_tracking.jl#L384-L394
where the normal argument order is
(dλ, y, λ, p, t, S, isautojacvec, dgrad, dy, W)
but the callback one puts p second. This breaks some of the
deeper changes to the code, since for example Enzyme often
wants to do something sophisticated
https://github.com/SciML/SciMLSensitivity.jl/blob/master/src/derivative_wrappers.jl#L731-L756
but this fails if y is now supposed to be a p-like object.
This is the core issue in 4 open PRs
(https://github.com/SciML/SciMLSensitivity.jl/pull/1335,
https://github.com/SciML/SciMLSensitivity.jl/pull/1292,
https://github.com/SciML/SciMLSensitivity.jl/pull/1260,
https://github.com/SciML/SciMLSensitivity.jl/pull/1223),
all of which want to allow p to be something other than a
vector (i.e. using the SciMLStructures.jl interface,
https://docs.sciml.ai/SciMLStructures/stable/interface/ and
https://docs.sciml.ai/SciMLStructures/stable/example/).
They fail specifically on the callback tests because the
normal spot for p has changed, so the interface would need to
be applied to the other argument as well. That is simply not
a maintainable way to structure the code. Instead, the
callback code should be normalized to use the same argument
structure as the rest of the codebase.
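To illustrate why the flipped ordering bites, here is a hypothetical Python toy (invented names; not SciMLSensitivity code): if the helper special-cases structured parameters in its p slot, a call site that swaps y and p positionally routes the structured object past that handling and into a slot that cannot accept it.

```python
# Hypothetical toy of the maintenance hazard: a vjp helper that
# special-cases structured (dict-like) parameters in the p slot.
# None of these names come from SciMLSensitivity.jl.

def vecjacobian(dlam, y, lam, p, t):
    # structured parameters get flattened before use -- but only
    # when they arrive in the p slot
    if isinstance(p, dict):
        p = list(p.values())
    # stand-in computation, just so both y and p participate
    return [li * sum(p) + yi for li, yi in zip(lam, y)]

lam, y, p = [1.0, 2.0], [0.0, 0.0], {"a": 2.0, "b": 3.0}

print(vecjacobian(None, y, lam, p, 0.0))  # normal order works: [5.0, 10.0]

try:
    # a call site that swaps y and p positionally: the dict now lands
    # in the y slot, skipping the structured-parameter handling
    vecjacobian(None, p, lam, y, 0.0)
except TypeError as e:
    print("flipped order breaks structured p:", e)
```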
But this was done for a reason. p and dy are flipped in the
callback code because it is trying to compute derivatives
with respect to p, keeping y constant. The objects being
differentiated are
https://github.com/SciML/SciMLSensitivity.jl/blob/master/src/callback_tracking.jl#L466-L496.
You can see `(ff::CallbackAffectPWrapper)(dp, p, u, t)` flips
the normal argument order: it's `p,u,t` instead of `u,p,t`.
But it's also doing something different. It's calculating
`dp`: this is a function of `p` (keeping u and t constant)
that computes the `affect!`'s change given `p`, and that is
what we want to differentiate. So it's effectively hijacking
the same `vecjacobian!` call in order to differentiate this
function w.r.t. p, by taking its code set up for `(du,u,p,t)`,
calling the same derivative on `(dp,p,u,t)`, and taking the
output of the derivative w.r.t. the second argument.
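The "hijack" itself is a standard trick, and a hedged Python toy makes it concrete (invented names; finite differences stand in for the real VJP machinery): machinery built to differentiate with respect to one positional slot will happily differentiate w.r.t. p if you pass p in that slot and hold u fixed.

```python
# Hypothetical sketch of hijacking derivative machinery by argument
# position. vjp_second_arg differentiates f w.r.t. whatever is passed
# as x; swapping roles gives a p-derivative from u-derivative code.

def vjp_second_arg(f, x, rest, v, h=1e-6):
    """v^T * d f(x, *rest) / dx, via central finite differences."""
    out = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        fp, fm = f(xp, *rest), f(xm, *rest)
        out.append(sum(vi * (a - b) for vi, a, b in zip(v, fp, fm)) / (2 * h))
    return out

def affect(u, p, t):
    # toy callback affect!: u -> u + t * p, elementwise
    return [ui + t * pi for ui, pi in zip(u, p)]

u, p, t, v = [1.0, 1.0], [2.0, 3.0], 0.5, [1.0, 1.0]

# derivative w.r.t. u: u goes in the differentiated slot
du = vjp_second_arg(lambda u_, p_, t_: affect(u_, p_, t_), u, (p, t), v)
# derivative w.r.t. p: same machinery, roles swapped
dp = vjp_second_arg(lambda p_, u_, t_: affect(u_, p_, t_), p, (u, t), v)
print(du, dp)  # ~[1.0, 1.0] and ~[0.5, 0.5]
```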
But this is very difficult to maintain if `p` needs to be
treated differently, since it can be some non-vector argument!
So we should normalize all of the functions here to use the
same ordering, i.e. `(ff::CallbackAffectPWrapper)(dp, u, p, t)`,
and if we need a different derivative out of `vecjacobian!`,
it should have a boolean switch selecting what to
differentiate with respect to. That would make it so
SciMLStructures code on the `p` argument always works.
Now, this derivative does actually exist: the `dgrad` argument
is used for the derivative of the output w.r.t. the p
argument. But if you look at the callback call again:

vecjacobian!(
    dgrad, integrator.p, grad, y, integrator.t, fakeSp;
    dgrad = nothing, dy = nothing
)
it's passing dgrad=nothing. The reason it does this is that we
only want that one derivative, so we effectively want the
first argument (the normal derivative accumulation ddu) to be
nothing, but the `vecjacobian!` calls do not support that? It
seems like they do have dλ=nothing branches, so it should work
to flip the arguments back to the right ordering and then set
things up to use the dgrad argument with nothing in the dλ
slot, but this should get thoroughly tested. So do this
refactor in isolation in order to get all of the callback
tests passing with a less hacky structure, and then the
SciMLStructures PR should be put on top of that. All 4 of
those PRs should be able to be closed if p just supports the
SciMLStructures interface (they are all almost the same).
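The proposed normalization can be sketched in a hedged Python toy (invented names, finite differences in place of the real VJP machinery): one (u, p, t) ordering everywhere, a flag selecting the differentiation target, and structured-parameter handling living in exactly one place so a non-vector p works for both derivatives.

```python
# Hypothetical sketch of the proposed refactor: uniform (u, p, t)
# ordering with a `wrt` switch, so structured-p support is written once.

def flatten_p(p):
    # the single home for structured-parameter support in this toy
    return list(p.values()) if isinstance(p, dict) else list(p)

def vecjacobian(f, u, p, t, v, wrt="u", h=1e-6):
    """v^T Jacobian of f(u, p, t) w.r.t. u or p, chosen by `wrt`."""
    pv = flatten_p(p)
    x = list(u) if wrt == "u" else pv
    call = (lambda x_: f(x_, pv, t)) if wrt == "u" else (lambda x_: f(u, x_, t))
    out = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        fp, fm = call(xp), call(xm)
        out.append(sum(vi * (a - b) for vi, a, b in zip(v, fp, fm)) / (2 * h))
    return out

def affect(u, p, t):
    return [ui + t * pi for ui, pi in zip(u, p)]

u, p, t, v = [1.0, 1.0], {"a": 2.0, "b": 3.0}, 0.5, [1.0, 1.0]
print(vecjacobian(affect, u, p, t, v, wrt="u"))  # ~[1.0, 1.0]
print(vecjacobian(affect, u, p, t, v, wrt="p"))  # ~[0.5, 0.5]; dict p just works
```

The point of the design is visible in the last line: because the argument order never changes, a structured p only has to be supported in `flatten_p`, not at every call site.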
On these kinds of queries, the resulting PR is quite usable on the first try.