Memory usage while solving `JumpProblem` increases with run time, even with `save_positions=(false,false)`

I am running into a problem where memory allocation depends on the actual run time of the solver for a stochastic jump process (i.e. it depends on the T that defines tspan = (0.0, T)). This causes notable performance drops when multi-threading a JumpProblem() via EnsembleProblem(args, EnsembleThreads()).

To illustrate the problem, I present a MWE below. It highlights the memory issue without defining an ensemble problem. Note that the real problem I am solving is more complex, and there the memory issue actually degrades performance when multi-threading and/or distributing the solves (using EnsembleThreads() and/or EnsembleDistributed()).
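For reference, the ensemble setup looks roughly like this (a minimal sketch; the trajectory count is a placeholder, and jump_prob is built as in the MWE below):

# Each trajectory re-solves the same JumpProblem on its own thread.
eprob = EnsembleProblem(jump_prob)
esol = solve(eprob, SSAStepper(), EnsembleThreads(); trajectories = 1000)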

(All benchmarks below are from subsequent calls to the functions, not the first call. My code has all of this in a single module, but I split it up into a MWE for clarity.)

Minimal working example

As an illustration/MWE of my problem, let us consider a very simple 2-species Lotka-Volterra system with a carrying capacity K. This is just the standard 2-species generalized Lotka-Volterra model with unit interaction a_{xy} = a_{yx} = 1 and K_x = K_y = K.

\begin{align} \frac{dx}{dt} &= \eta x \left(1 - \frac{x+y}{K}\right), \\ \frac{dy}{dt} &= \eta y \left(1 - \frac{x+y}{K}\right) \end{align}

For this problem, we expect simple logistic growth with rate \eta until x+y = K, after which the system is in a stable state.
Let us define the function using Catalyst.jl, and solve the ODE directly:

function lotkavolterra(;T::Float64=100.0, nsaves::Int64=1)
    function growth(η,X,Y,K)
        return η*(1-(X+Y)/K)
    end
    
    lv_model = @reaction_network begin
        growth(η,X,Y,K), X --> 2X
        growth(η,Y,X,K), Y --> 2Y
    end

    p = (:η => 0.1, :K => 1000)
    u0 = [:X => 50, :Y => 100]
    tspan = (0.0,T)
    Δt = (tspan[end]-tspan[begin])/nsaves
    tsave = tspan[begin]:Δt:tspan[end]

    odeprob = ODEProblem(lv_model, u0, tspan, p)
    odesol = solve(odeprob, Tsit5(); saveat=tsave)
    return odesol
end
julia> sol = lotkavolterra(T=100.0, nsaves=100);
julia> plot(sol)

[figure: plot of the ODE solution, showing logistic growth of X and Y that saturates once X+Y reaches K]
So far so good.

Additionally, we expect Catalyst.jl to build non-allocating functions (see also this answer), so the total memory allocation should depend only on nsaves, not on T.
(I am only showing the memory estimates, as the timings are not of interest at the moment.)

julia> using BenchmarkTools
julia> @benchmark lotkavolterra(T=100.0, nsaves=10)
BenchmarkTools.Trial: 10000 samples with 6 evaluations.
 Memory estimate: 6.91 KiB, allocs estimate: 59.
julia> @benchmark lotkavolterra(T=100.0, nsaves=100)
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
 Memory estimate: 17.50 KiB, allocs estimate: 150.
julia> @benchmark lotkavolterra(T=1000.0, nsaves=100)
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
 Memory estimate: 17.50 KiB, allocs estimate: 150.

So far so good.
Now we simulate the Lotka-Volterra system by defining a JumpProblem; this time we are only interested in the final state, thus:

function lotkavolterra(;T::Float64=100.0)
    function growth(η,X,Y,K)
        return η*(1-(X+Y)/K)
    end
    
    lv_model = @reaction_network begin
        growth(η,X,Y,K), X --> 2X
        growth(η,Y,X,K), Y --> 2Y
    end

    p = (:η => 0.1, :K => 1000)
    u0 = [:X => 50, :Y => 100]
    tspan = (0.0,T)

    discrete_prob = DiscreteProblem(lv_model, u0, tspan, p)
    jump_prob = JumpProblem(lv_model, discrete_prob, Direct())
    jsol = solve(jump_prob, SSAStepper())
    return jsol
end

First, to double-check:

julia> sol = lotkavolterra(T=100.0); plot(sol)

[figure: plot of a jump-process trajectory, qualitatively matching the ODE solution]
Seems good.

Memory usage increases as total time increases

Now the problem. I expected Catalyst.jl to build non-allocating functions here as well, as in the ODE case, but instead I observe a dependence on the total run time T. Again omitting the timing information, and saving only the final state by adding save_positions=(false,false) to the definition of JumpProblem(...), i.e.

    jump_prob = JumpProblem(lv_model, discrete_prob, Direct(); save_positions=(false,false))
julia> @benchmark lotkavolterra(T=100.0)
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
 Memory estimate: 41.64 KiB, allocs estimate: 1703.
julia> @benchmark lotkavolterra(T=10.0)
BenchmarkTools.Trial: 10000 samples with 5 evaluations.
 Memory estimate: 8.78 KiB, allocs estimate: 301.

Why does the memory estimate increase?

Investigations

Interestingly, when I change the growth rate function to an in-place version:

function growth!(η,X,Y,K)
    return η*(1-(X+Y)/K)
end

nothing really changes – I get the same increase in memory and allocations.
More interesting is that when the carrying-capacity term is removed (as in the examples of JumpProcesses), the memory does not increase! Other aggregators, such as RSSA(), show the same problem.

Question

What is happening here? Why does the memory usage increase with the run time when I define a JumpProblem() from a DiscreteProblem(), even though only the last state is saved by setting save_positions=(false,false)? Does this have to do with the interpolation that is done internally? Is there any way to have terms, such as the carrying-capacity term in this example, without increasing the memory usage?
I ask because my real problem is very similar – basically an n-species Lotka-Volterra model with different carrying-capacity terms – and, as mentioned above, I am constantly running into memory-related issues, especially when turning the JumpProblem() into an EnsembleProblem() for multi-threading and/or distributed computing later down the line.

Any help or tips would be greatly appreciated, thanks!

Full code

Module
module TestJumpProcesses

export lotkavolterra

using JumpProcesses
using Catalyst
using DifferentialEquations
using BenchmarkTools

function lotkavolterra(;T::Float64=100.0, nsaves::Int64=1)
    function growth(η,X,Y,K)
        return η*(1-(X+Y)/K)
    end
    
    lv_model = @reaction_network begin
        growth(η,X,Y,K), X --> 2X
        growth(η,Y,X,K), Y --> 2Y
    end

    p = (:η => 0.1, :K => 1000)
    u0 = [:X => 50, :Y => 100]
    tspan = (0.0,T)
    Δt = (tspan[end]-tspan[begin])/nsaves
    tsave = tspan[begin]:Δt:tspan[end]

    @info "Benchmarking ODE problem..."
    odeprob = ODEProblem(lv_model, u0, tspan, p)
    odesol = solve(odeprob, Tsit5(); saveat=tsave)
    bode = @benchmark solve($odeprob, Tsit5(); saveat=$tsave)

    @info "Benchmarking jump problem..."
    discrete_prob = DiscreteProblem(lv_model, u0, tspan, p)
    jump_prob = JumpProblem(lv_model, discrete_prob, Direct(); save_positions=(false,false))
    sol = solve(jump_prob, SSAStepper())
    bjump = @benchmark solve($jump_prob, SSAStepper())

    return bode, bjump
end

end # module

Open an issue on JumpProcesses.jl

The memory use with EnsembleProblems is probably because it deepcopies the JumpProblem for each simulation (when using multi-threading):

You are benchmarking a lot of stuff beyond just calling solve. Do you see a memory use difference if you just benchmark solve using problems that only differ in their final time?

Here is what I see just benchmarking solve:

julia> function lotkavolterra(;T::Float64=100.0)
           function growth(η,X,Y,K)
               return η*(1-(X+Y)/K)
           end
           
           lv_model = @reaction_network begin
               growth(η,X,Y,K), X --> 2X
               growth(η,Y,X,K), Y --> 2Y
           end

           p = (:η => 0.1, :K => 1000)
           u0 = [:X => 50, :Y => 100]
           tspan = (0.0,T)

           discrete_prob = DiscreteProblem(lv_model, u0, tspan, p)
           jump_prob = JumpProblem(lv_model, discrete_prob, Direct(), save_positions = (false,false))
           return jump_prob
       end
lotkavolterra (generic function with 1 method)

julia> jprob = lotkavolterra(T = 10.0)
JumpProblem with problem DiscreteProblem with aggregator Direct
Number of jumps with discrete aggregation: 2
Number of jumps with continuous aggregation: 0
Number of mass action jumps: 0


julia> @btime solve($jprob, $(SSAStepper()))
  8.750 μs (243 allocations: 7.42 KiB)
retcode: Success
Interpolation: Piecewise constant interpolation
t: 2-element Vector{Float64}:
  0.0
 10.0
u: 2-element Vector{Vector{Int64}}:
 [50, 100]
 [111, 235]

julia> jprob2 = lotkavolterra(T = 100.0)
JumpProblem with problem DiscreteProblem with aggregator Direct
Number of jumps with discrete aggregation: 2
Number of jumps with continuous aggregation: 0
Number of mass action jumps: 0


julia> @btime solve($jprob2, $(SSAStepper()))
  60.208 μs (1703 allocations: 41.64 KiB)
retcode: Success
Interpolation: Piecewise constant interpolation
t: 2-element Vector{Float64}:
   0.0
 100.0
u: 2-element Vector{Vector{Int64}}:
 [50, 100]
 [287, 712]

julia> jprob3 = lotkavolterra(T = 1000.0)
JumpProblem with problem DiscreteProblem with aggregator Direct
Number of jumps with discrete aggregation: 2
Number of jumps with continuous aggregation: 0
Number of mass action jumps: 0


julia> @btime solve($jprob3, $(SSAStepper()))
  60.167 μs (1711 allocations: 41.83 KiB)
retcode: Success
Interpolation: Piecewise constant interpolation
t: 2-element Vector{Float64}:
    0.0
 1000.0
u: 2-element Vector{Vector{Int64}}:
 [50, 100]
 [395, 605]

julia> jprob4 = lotkavolterra(T = 10000.0)
JumpProblem with problem DiscreteProblem with aggregator Direct
Number of jumps with discrete aggregation: 2
Number of jumps with continuous aggregation: 0
Number of mass action jumps: 0


julia> @btime solve($jprob4, $(SSAStepper()))
  60.000 μs (1711 allocations: 41.83 KiB)
retcode: Success
Interpolation: Piecewise constant interpolation
t: 2-element Vector{Float64}:
     0.0
 10000.0
u: 2-element Vector{Vector{Int64}}:
 [50, 100]
 [412, 588]

but note that the system will eventually stop evolving when the growth rate hits zero, so I guess it isn't surprising that there is no further change.

Finally, here is a simpler example that does not "turn off" (i.e. reactions keep occurring for all time). I see no change in allocations even though, as expected, the running time increases:

julia> rn =  @reaction_network begin
           @species A(t) = 0.0
           1.0, 0 --> A
           end
Model ##ReactionSystem#484
States (1):
  A(t) [defaults to 0.0]
Parameters (0):

julia> dprob = DiscreteProblem(rn, [], (0.0, 10.0))
DiscreteProblem with uType Vector{Float64} and tType Float64. In-place: true
timespan: (0.0, 10.0)
u0: 1-element Vector{Float64}:
 0.0

julia> jprob = JumpProblem(rn, dprob, Direct(); save_positions = (false,false))
JumpProblem with problem DiscreteProblem with aggregator Direct
Number of jumps with discrete aggregation: 0
Number of jumps with continuous aggregation: 0
Number of mass action jumps: 1


julia> sol = solve(jprob, SSAStepper())
retcode: Success
Interpolation: Piecewise constant interpolation
t: 2-element Vector{Float64}:
  0.0
 10.0
u: 2-element Vector{Vector{Float64}}:
 [0.0]
 [7.0]

julia> @btime  solve($jprob, SSAStepper())
  768.817 ns (11 allocations: 1.94 KiB)
retcode: Success
Interpolation: Piecewise constant interpolation
t: 2-element Vector{Float64}:
  0.0
 10.0
u: 2-element Vector{Vector{Float64}}:
 [0.0]
 [7.0]

julia> dprob2 = DiscreteProblem(rn, [], (0.0, 100.0))
DiscreteProblem with uType Vector{Float64} and tType Float64. In-place: true
timespan: (0.0, 100.0)
u0: 1-element Vector{Float64}:
 0.0

julia> jprob2 = JumpProblem(rn, dprob2, Direct(); save_positions = (false,false))
JumpProblem with problem DiscreteProblem with aggregator Direct
Number of jumps with discrete aggregation: 0
Number of jumps with continuous aggregation: 0
Number of mass action jumps: 1


julia> sol = solve(jprob2, SSAStepper())
retcode: Success
Interpolation: Piecewise constant interpolation
t: 2-element Vector{Float64}:
   0.0
 100.0
u: 2-element Vector{Vector{Float64}}:
 [0.0]
 [105.0]

julia> @btime  solve($jprob2, SSAStepper())
  3.923 μs (11 allocations: 1.94 KiB)
retcode: Success
Interpolation: Piecewise constant interpolation
t: 2-element Vector{Float64}:
   0.0
 100.0
u: 2-element Vector{Vector{Float64}}:
 [0.0]
 [129.0]

I also see no memory use difference on this example if I change the rate 1.0 to 1.0 + (1 - A/100), which forces usage of a ConstantRateJump.
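For reference, that variant looks like this (a sketch; the names are mine, not from the output above):

# A state-dependent rate forces Catalyst to generate a ConstantRateJump
# instead of a MassActionJump.
rn_crj = @reaction_network begin
    @species A(t) = 0.0
    1.0 + (1 - A/100), 0 --> A
end
dprob_crj = DiscreteProblem(rn_crj, [], (0.0, 10.0))
jprob_crj = JumpProblem(rn_crj, dprob_crj, Direct(); save_positions = (false,false))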

Yes, sorry – I had included the full code in order to avoid this, but it was easy to miss. In the full code I only benchmarked the solve() and got similar results to you. Note that in both my case and yours, memory usage increases from T=10.0 to T=100.0. Is this to be expected?

Here is what I see just benchmarking solve

It is indeed as you say: probably because my MWE was too simple, it stagnates at some point.
Interestingly, the examples you provided (and some others I tried) do seem to work…

For example

function loggrowth(;T::Float64=100.0)
    @parameters η, K
    @variables t
    @species A(t) B(t)
    rn = @reaction_network begin
        η*(1 - (A+B)/K), A --> 2A
        μ, B --> 0
    end
    p = (:η => 1, :μ => 1e-6, :K => 1e7)
    u0 = [:A => 1, :B => 1e5]
    tspan = (0.0, T)

    dprob = DiscreteProblem(rn, u0, tspan, p)
    jprob = JumpProblem(rn, dprob, Direct(); save_positions=(false,false))
    return jprob
end
julia> long_prob = loggrowth(T=1e7)
JumpProblem with problem DiscreteProblem with aggregator Direct
Number of jumps with discrete aggregation: 1
Number of jumps with continuous aggregation: 0
Number of mass action jumps: 1


julia> long_sol = solve(long_prob, SSAStepper())
retcode: Success
Interpolation: Piecewise constant interpolation
t: 2-element Vector{Float64}:
 0.0
 1.0e7
u: 2-element Vector{Vector{Float64}}:
 [1.0, 100000.0]
 [9.999998e6, 2.0]

julia> @btime solve($long_prob, SSAStepper())
  157.063 ms (11 allocations: 2.09 KiB)
retcode: Success
Interpolation: Piecewise constant interpolation
t: 2-element Vector{Float64}:
 0.0
 1.0e7
u: 2-element Vector{Vector{Float64}}:
 [1.0, 100000.0]
 [9.999995e6, 5.0]
julia> short_prob = loggrowth(T=1e4)
JumpProblem with problem DiscreteProblem with aggregator Direct
Number of jumps with discrete aggregation: 1
Number of jumps with continuous aggregation: 0
Number of mass action jumps: 1


julia> short_sol = solve(short_prob, SSAStepper())
retcode: Success
Interpolation: Piecewise constant interpolation
t: 2-element Vector{Float64}:
     0.0
 10000.0
u: 2-element Vector{Vector{Float64}}:
 [1.0, 100000.0]
 [9.900929e6, 99071.0]

julia> @btime solve($short_prob, SSAStepper())
  152.330 ms (11 allocations: 2.09 KiB)
retcode: Success
Interpolation: Piecewise constant interpolation
t: 2-element Vector{Float64}:
     0.0
 10000.0
u: 2-element Vector{Vector{Float64}}:
 [1.0, 100000.0]
 [9.901011e6, 98989.0]

Question

So, what is different in the code that I provided earlier, and why does that one show memory increases? I will play around a bit more and come back to this, as the difference between the MWEs and my real code is very odd.

Yeah, I have no idea yet why your original example shows allocation differences. It is indeed weird.

Just so you are aware:

    @parameters η, K
    @variables t
    @species A(t) B(t)

in loggrowth is not doing anything in your latest example. What Catalyst version are you on? In the latest version the DSL would interpret any substrate/product as a species (A and B), and everything else as a parameter automatically. That is why your loggrowth function works ok. If you put those commands inside the @reaction_network macro then they would be used.

If you want the @reaction_network macro to see an external variable you have to interpolate it, like $A, when using it.
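For example, a minimal sketch of the interpolation syntax (with A as a stand-in species):

@variables t
@species A(t)
rn = @reaction_network begin
    1.0, 0 --> $A    # $A interpolates the externally defined species A(t)
end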

Just so you are aware:

    @parameters η, K
    @variables t
    @species A(t) B(t)

in loggrowth is not doing anything in your latest example. What Catalyst version are you on? In the latest version the DSL would interpret any substrate/product as a species (A and B), and everything else as a parameter automatically. That is why your loggrowth function works ok. If you put those commands inside the @reaction_network macro then they would be used.

I did not know this; I was just following the latest tutorials in the Catalyst.jl documentation (or perhaps it was JumpProcesses.jl) – but it appears this notation is indeed deprecated?

What version of Catalyst.jl?

(examples) pkg> status Catalyst
Status `~/<path/to/project>/Project.toml`
  [479239e8] Catalyst v13.2.0

Check out:

https://docs.sciml.ai/Catalyst/stable/catalyst_functionality/dsl_description/#Explicit-specification-of-network-species-and-parameters

https://docs.sciml.ai/Catalyst/stable/catalyst_functionality/dsl_description/#Specifying-alternative-time-variables-and/or-extra-independent-variables

https://docs.sciml.ai/Catalyst/stable/catalyst_functionality/dsl_description/#dsl_description_interpolation_of_variables


Can you point me to the tutorial you are referencing? It sounds like something didn't get updated (since you are on the latest version…).

I wish I could, but it seems all of the links in my history now point to the updated version. Do you know when the docs were updated? I learned about Catalyst.jl ~2 weeks ago and have not gone back to check on any updated tutorials since following a few around that time. I do remember it being like this (perhaps it was in a blog post?). I will dig into my history a bit more and edit or reply if I find it.

But my guess is that it was this one, as I was generating reaction networks programmatically:
https://docs.sciml.ai/Catalyst/stable/catalyst_functionality/programmatic_CRN_construction/

I don't think anything has been updated in several weeks, and nothing in the newer notation for longer than that.

In any case, it sounds like you are now getting the correct versions. If you find something wrong somewhere just let us know.

I'll try to play with your example with the extra allocations more, but if you make any more progress on distinguishing what features induce the allocations please do also let me know!


@ChrisRackauckas could this be the issue? BenchmarkTools does report allocations in calling this function:

I'm noticing the getindex here generates code that has a Union type (since p.affects! is a tuple of functions). I'm not sure how to get around this, though, unless we want to just drop the non-FunctionWrapper versions of Direct. (There is also FunctionWranglers.jl, but it seems unmaintained…)
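The effect is easy to reproduce in isolation (a standalone sketch, not JumpProcesses code): with enough distinct function types in a tuple to defeat union splitting, indexing with a runtime Int and calling the result goes through dynamic dispatch and allocates.

using BenchmarkTools

# Five distinct anonymous-function types; affs[idx] with a runtime idx
# infers an abstract type, so the call is dynamically dispatched.
affects = (x -> x + 1.0, x -> 2x, x -> x^2, x -> x - 1.0, x -> x/2)
call_affect(affs, idx, x) = affs[idx](x)

@btime call_affect($affects, idx, 1.0) setup = (idx = rand(1:5))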

Switching to DirectFW reduces this but doesn't eliminate the issue.

Not sure if it'll help, but I have found a problem that highlights the issue very clearly (if I am not mistaken…). It concerns a problem that is very close to (basically a simplification of) the one presented in equation (1) of this paper. That is, a problem of plasmid maintenance in bacteria:

\begin{align} \frac{dF}{dt} &= \eta F \left( 1 - \frac{F+P}{K} \right) - \gamma F P + \rho P - \mu F, \\ \frac{dP}{dt} &= \eta P \left( 1 - \frac{F+P}{K} \right) + \gamma F P - \rho P - \mu P \end{align}

where \eta is the intrinsic growth rate for the plasmid-free (F) and plasmid-carrying (P) populations, \gamma is the plasmid infection rate, \rho the recovery rate (through segregation error), and \mu the background mortality rate.

The code for this system using Catalyst.jl:

function catalyst_plasmid(;T::Float64=250.0)    
    function rate(η,X,Y,K)
        return η*(1-(X+Y)/K)
    end
    
    fp_model = @reaction_network begin
        rate(η,F,P,K), F --> 2F   # Background reproduction F
        rate(η,P,F,K), P --> 2P   # Background reproduction P  !NOTE: order of F & P is switched
        μ, F --> 0                # Background mortality F
        μ, P --> 0                # Background mortality P
        γ, F + P --> 2P           # Infection (conjugation)
        ρ, P --> F                # Recovery (segregation error)
    end    
    p = (:η => 1.0, :μ => 0.1, :γ => 1e-5, :ρ => 0.01, :K => 1e4)
    u0 = [:F => 100, :P => 10]
    tspan = (0.0, T)

    dprob = DiscreteProblem(fp_model, u0, tspan, p)
    jump_prob = JumpProblem(fp_model, dprob, Direct(); save_positions=(false,false))
    return jump_prob
end

and the benchmarks

julia> using Catalyst, DifferentialEquations

julia> stepper = SSAStepper()
SSAStepper()

julia> short_prob = catalyst_plasmid(T=1e3);

julia> short_sol = solve(short_prob, stepper); 

julia> @btime solve($short_prob, $stepper)
  87.697 ms (1805709 allocations: 41.33 MiB)

julia> long_prob = catalyst_plasmid(T=1e5); # 100 times as long

julia> long_sol = solve(long_prob, stepper);

julia> @btime solve($long_prob, $stepper)
  10.803 s (180003301 allocations: 4.02 GiB)

So increasing the run time by a factor of 100 also seems to increase memory usage by a factor of 100. Something must be wrong somewhere – let's hope it is in my implementation!
Sadly, I cannot remove the carrying-capacity term, as this allows infinite growth and, rightfully, crashes the REPL.

For the non-FunctionWrapper versions we should codegen a manual split of this into runtime branches:

if idx === 1
  @inbounds p.affects![1](integrator)
elseif idx === 2
  @inbounds p.affects![2](integrator)
...

would make it go away.
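A standalone sketch of why that helps, continuing the toy tuple from the sketch above: each branch calls through a concrete function type, so the dynamic dispatch (and its allocation) disappears.

# Manually split runtime branching over the tuple of functions.
function call_affect_split(affs, idx, x)
    if idx === 1
        @inbounds affs[1](x)
    elseif idx === 2
        @inbounds affs[2](x)
    elseif idx === 3
        @inbounds affs[3](x)
    elseif idx === 4
        @inbounds affs[4](x)
    else
        @inbounds affs[5](x)
    end
end

@btime call_affect_split($affects, idx, 1.0) setup = (idx = rand(1:5))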

Yeah, though even using FunctionWrappers I'm still seeing small allocations.

Could somehow the generated function from Symbolics be allocating a small amount here?

First-call compilation? Not all affects will always be called.

Yeah, when calling the affect! function via the FunctionWrapper in DirectFW I'm still getting a single reported 32-byte allocation. It occurs consistently, not just on the first call.
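For context, the FunctionWrapper approach (as used by DirectFW) stores the heterogeneous functions behind a single concrete type, so indexing stays type-stable; a minimal sketch, assuming all the affects share one signature and reusing the toy tuple from above:

using FunctionWrappers: FunctionWrapper

# Every element has the concrete type FunctionWrapper{Float64,Tuple{Float64}}.
wrapped = [FunctionWrapper{Float64,Tuple{Float64}}(f) for f in affects]
call_wrapped(ws, idx, x) = ws[idx](x)

@btime call_wrapped($wrapped, idx, 1.0) setup = (idx = rand(1:5))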

@ChrisRackauckas I can confirm that this pure JumpProcesses version of the first network does not exhibit the issue. So this seems to be coming from the Symbolics codegen or RuntimeGeneratedFunctions.

julia> function lotkavolterradirect(; T = 100.0)
           growth(η,X,Y,K) = η*(1-(X+Y)/K)   # same rate function as before, now defined locally
           r1(u,p,t) = growth(p[1], u[1], u[2], p[2]) * u[1]
           r2(u,p,t) = growth(p[1], u[2], u[1], p[2]) * u[2]
           aff1!(int) = int.u[1] += 1
           aff2!(int) = int.u[2] += 1
           p = (.1, 1000)
           u0 = [50, 100]
           tspan = (0.0, T)
           dprob = DiscreteProblem(u0, tspan, p)
           jprob = JumpProblem(dprob, Direct(), ConstantRateJump(r1,aff1!), ConstantRateJump(r2, aff2!); save_positions = (false,false))
           jprob
       end
lotkavolterradirect (generic function with 1 method)

julia> jprob = lotkavolterradirect(T = 10.0)
JumpProblem with problem DiscreteProblem with aggregator Direct
Number of jumps with discrete aggregation: 2
Number of jumps with continuous aggregation: 0
Number of mass action jumps: 0


julia> sol = solve(jprob, SSAStepper())
retcode: Success
Interpolation: Piecewise constant interpolation
t: 2-element Vector{Float64}:
  0.0
 10.0
u: 2-element Vector{Vector{Int64}}:
 [50, 100]
 [102, 216]

julia> @btime solve($jprob, SSAStepper())
  5.858 μs (11 allocations: 1.03 KiB)
retcode: Success
Interpolation: Piecewise constant interpolation
t: 2-element Vector{Float64}:
  0.0
 10.0
u: 2-element Vector{Vector{Int64}}:
 [50, 100]
 [109, 228]

julia> jprob2 = lotkavolterradirect(T = 100.0)
JumpProblem with problem DiscreteProblem with aggregator Direct
Number of jumps with discrete aggregation: 2
Number of jumps with continuous aggregation: 0
Number of mass action jumps: 0


julia> @btime solve($jprob2, SSAStepper())
  30.625 μs (11 allocations: 1.03 KiB)
retcode: Success
Interpolation: Piecewise constant interpolation
t: 2-element Vector{Float64}:
   0.0
 100.0
u: 2-element Vector{Vector{Int64}}:
 [50, 100]
 [402, 598]

Example:

julia> jprob = lotkavolterra(T = 10.0)
JumpProblem with problem DiscreteProblem with aggregator Direct
Number of jumps with discrete aggregation: 2
Number of jumps with continuous aggregation: 0
Number of mass action jumps: 0

julia> aff1 = jprob.discrete_jump_aggregation.affects![1]
RuntimeGeneratedFunction(#=in ModelingToolkit=#, #=using ModelingToolkit=#, :((var"##MTKIntegrator#1026",)->begin
          #= /Users/isaacsas/.julia/packages/SymbolicUtils/1JRDc/src/code.jl:350 =#
          #= /Users/isaacsas/.julia/packages/SymbolicUtils/1JRDc/src/code.jl:351 =#
          #= /Users/isaacsas/.julia/packages/SymbolicUtils/1JRDc/src/code.jl:352 =#
          begin
              Ėā‚‹out = (var"##MTKIntegrator#1026").u
              Ėā‚‹arg1 = (var"##MTKIntegrator#1026").u
              Ėā‚‹arg2 = (var"##MTKIntegrator#1026").p
              t = (var"##MTKIntegrator#1026").t
              begin
                  begin
                      #= /Users/isaacsas/.julia/packages/Symbolics/VIBnK/src/build_function.jl:520 =#
                      #= /Users/isaacsas/.julia/packages/SymbolicUtils/1JRDc/src/code.jl:399 =# @inbounds begin
                              #= /Users/isaacsas/.julia/packages/SymbolicUtils/1JRDc/src/code.jl:395 =#
                              Ėā‚‹out[1] = (+)(1, Ėā‚‹arg1[1])
                              #= /Users/isaacsas/.julia/packages/SymbolicUtils/1JRDc/src/code.jl:397 =#
                              nothing
                          end
                  end
              end
          end
      end))

julia> int = (u = [1.0], p = (1.0, 1.0, 1.0), t = 1.0)
(u = [1.0], p = (1.0, 1.0, 1.0), t = 1.0)

julia> @btime aff1($int)
  14.194 ns (1 allocation: 48 bytes)

julia> @btime aff1($int)
  14.194 ns (1 allocation: 48 bytes)

If you take the generated function and just eval it then does it go away?
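(A minimal sketch of that check, with the printed affect! body transcribed by hand into a plain top-level function; if the 48-byte allocation disappears, the overhead is in the RuntimeGeneratedFunctions call path rather than in the generated code itself.)

# Hand-transcribed equivalent of the generated affect! body above.
function aff1_plain(int)
    out = int.u            # ˍ₋out in the generated code
    arg1 = int.u           # ˍ₋arg1
    @inbounds out[1] = 1 + arg1[1]
    nothing
end

int = (u = [1.0], p = (1.0, 1.0, 1.0), t = 1.0)
@btime aff1_plain($int)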