What is the best way to use a derived value (one that is not part of the solution vector) from the previous time step? I was thinking of passing it like a parameter and updating it at each step.

I did something like:

```
# Approach 1
function dynamics!(du, u, p, t)
    p_new = calc_function(u, p)
    du[1] = exp(p_new)
    p = p_new  # intended to carry the derived value to the next step
end
```

I am worried that this might change the value of p even when the step is rejected by the solver due to error tolerance.
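That worry is well founded: an adaptive solver evaluates the right-hand side several times per step, including on rejected attempts. A quick way to see this is to count RHS calls (a minimal sketch, assuming OrdinaryDiffEq.jl; the toy problem is just a stand-in):

```
using OrdinaryDiffEq

ncalls = Ref(0)  # count every RHS evaluation
f!(du, u, p, t) = (ncalls[] += 1; du[1] = -u[1])

prob = ODEProblem(f!, [1.0], (0.0, 1.0))
sol = solve(prob, Tsit5())

# The RHS is called far more often than there are accepted steps,
# so state mutated inside it would also pick up trial evaluations.
println(ncalls[], " RHS calls for ", length(sol.t) - 1, " accepted steps")
```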

Another way I thought of was to append it to the solution vector and then apply the change in a callback.

```
# Approach 2
function dynamics!(du, u, p, t)
    p_new = calc_function(u, p)
    du[1] = exp(p_new)
    du[2] = p_new
end

function condition(u, t, integrator)
    true  # fire after every accepted step
end

function affect!(integrator)
    integrator.p = (integrator.u[2] - integrator.uprev[2]) /
                   (integrator.t - integrator.tprev)
end
```

But I am afraid that this might make the solution vector too long, slowing down the solve.

calc_function involves a lot of function calls internally, so I'd prefer not to call it again in affect!.

What would be a good way to achieve this functionality?

Also, I might have to use this parameter in the condition function too. Would that make one approach preferable over the other?

I can make a minimal working example but I am not sure how to reproduce time-step rejection without making the problem too complex.

Thanks

Keeping track of that parameter in this way does not seem like a good idea to me. I don't think you can really know which time points a solver will query, or in what order.

It sounds like you might have a delay differential equation. I suggest having a look at the DDE tutorial to see whether that helps you.

I did look at DDE solvers. I thought I could make it work by passing the parameter through one of the solution derivatives, but I need the value at the previous step, which is not a fixed delay value. Also, I think the interpolation would be unnecessary if I am just concerned with the previous step.

There is no such thing as a "value at a previous step". It's an ill-defined quantity: it could be anything because it's method dependent. How far away is the previous step? `dt` changes and is ill-defined. In general, models written like this are not convergent.

The callback you are writing calculates a delayed derivative, which a DDE can give you.

I understand that in a general context you are absolutely correct. My application is very specific and requires these quantities. I had to write an ODE solver by hand in MATLAB to achieve this, but I was wondering if I could achieve something similar here without doing that.

Let me try to explain a simpler version of what is happening geometrically, maybe you could advise a better way to do this.

These p's are points of contact between two circles on paper. Two circles can have at most two points of contact at a given time, say A and B. Among the variables being evolved in time, uA is concerned with A and uB with B. My algorithm for finding contact can return either [A, B] or [B, A], so I need to detect when such a switch has happened and undo it to maintain continuity when calculating uA and uB.

That is why I am looking for a reliable way to know the values at the previous step.

Is DDE still a better choice? Thanks for your help.

So at each timestep `t` you run some computation that returns a vector in ill-defined order, either `[A(t), B(t)]` or `[B(t), A(t)]`. But these change somewhat slowly, so you want to compare the computed vector with the "previous" one, `[A(t-dt), B(t-dt)]`, and order it such that the distance between the vectors is minimized, or something like that. Did I get that right?
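That matching step could be sketched like this (`match_previous` is a hypothetical helper, not from any package; contacts are assumed to be 2D coordinate vectors):

```
using LinearAlgebra: norm

# Reorder the freshly computed contacts so they line up with the
# contacts from the previous step, minimizing the total distance.
function match_previous(new_contacts, prev_contacts)
    length(new_contacts) < 2 && return new_contacts
    a, b = new_contacts
    d_keep = norm(a - prev_contacts[1]) + norm(b - prev_contacts[2])
    d_swap = norm(b - prev_contacts[1]) + norm(a - prev_contacts[2])
    return d_keep <= d_swap ? [a, b] : [b, a]
end
```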

I think a DDE would still be the canonical way of achieving that (but listen to what Chris says!). Perhaps the easiest way of solving your problem would be to just step the integrator yourself and update the parameters with the previous solution after each step.

See the documentation on the integrator interface.

Sketch of what you could do:

```
# instead of calling solve(prob, alg; kwargs...)
integrator = init(prob, alg; kwargs...)
while integrator.t < tmax
    step!(integrator)  # performs one step of variable size
    # update integrator.p with information from the accepted step
    integrator.p.prevstep .= integrator.u  # e.g.
end
```

This is similar to your callback-based solution, which I think should be fine as well.
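Fleshed out, that loop might look like this (a sketch assuming OrdinaryDiffEq.jl; the toy ODE and the `Params` struct are illustrative placeholders):

```
using OrdinaryDiffEq

mutable struct Params
    prevstep::Vector{Float64}  # state at the previously accepted step
end

# toy RHS that uses the stored previous-step value
f!(du, u, p, t) = (du[1] = -u[1] + 0.1 * p.prevstep[1])

p = Params([1.0])
prob = ODEProblem(f!, [1.0], (0.0, 1.0), p)
integrator = init(prob, Tsit5())
while integrator.t < 1.0
    step!(integrator)                      # one accepted, adaptively sized step
    integrator.p.prevstep .= integrator.u  # record only accepted states
end
```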

Edit: I assume you cannot just `sort` your state vector because you cannot define some "global" order?

What's the reason this is done via a `DiscreteCallback` instead of a `ContinuousCallback`?

Yeah, it cannot be sorted because at any time there could be just A, just B, or both in either order. To impose something like B > A, I'd still need information from earlier. [A is the coordinate of the first point of contact and B is the other.]

I'll look at the integrator interface. Do you know if there is any speed/accuracy difference between stepping the integrator yourself and using solve?

I want to keep track of the points of contact; it is just information I require from the previous iteration. Since I want to do this at every step, I thought it'd be better to go with a discrete callback, as there is no event that triggers it (other than the existence of contact).

I am not sure I understand why it would matter whether the callback is continuous or discrete.

No. If you call `solve()`, that is basically equivalent to doing `step!(integrator, tmax)`.

On a related note, is it possible to ask the integrator to ignore part of the solution vector when checking for errors?

I understand if this seems mathematically unsound but it would reduce some unnecessary stepping.

You can set the tolerance really high on some components by using a vector of tolerances.
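For instance (a sketch assuming OrdinaryDiffEq.jl, with u[2] as the bookkeeping component whose error should be mostly ignored):

```
using OrdinaryDiffEq

f!(du, u, p, t) = (du[1] = -u[1]; du[2] = cos(10t))
prob = ODEProblem(f!, [1.0, 0.0], (0.0, 1.0))

# A per-component abstol: the huge tolerance on u[2] means its error
# estimate essentially never forces a step rejection.
sol = solve(prob, Tsit5(); abstol = [1e-8, 1e2], reltol = 1e-8)
```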