GC time issues when parallelizing taylorinteg?

Indeed, TaylorIntegration allocates a lot.

Yet, if you use the macro @taylorize to parse your ODEs, things improve considerably:

using TaylorIntegration

const μ = 1.0
const q0 = [0.19999999999999996, 0.0, 0.0, 3.0] # an initial condition for elliptical motion
const order = 28
const t0 = 0.0
const t_max = 10*(2π) # we are just taking a wild guess about the period ;)
const abs_tol = 1.0E-20
const steps = 500000

@taylorize function kepler!(dq, q, params, t)
    r_p3d2 = (q[1]^2+q[2]^2)^(3/2)
    
    dq[1] = q[3]
    dq[2] = q[4]
    dq[3] = -(μ*q[1])/r_p3d2  # parentheses needed to help `@taylorize`
    dq[4] = -(μ*q[2])/r_p3d2
    
    nothing
end

function task()
    t, _ = taylorinteg(kepler!, q0, t0, t_max, order, abs_tol, maxsteps=steps)
    return t[end]
end

function task_noparse()
    t, _ = taylorinteg(kepler!, q0, t0, t_max, order, abs_tol, maxsteps=steps, parse_eqs=false)
    return t[end]
end
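
Before timing, one can check that both variants agree on the final time (this sanity check is my addition; calling both functions once also compiles them, so the @time results below correspond to a second run):

task() ≈ task_noparse()  # parsed and non-parsed integrations should agree (up to roundoff at most)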

Then I get

@time task()  # second run of task()
  0.003598 seconds (2.31 k allocations: 19.578 MiB)

@time task_noparse() # second run of task_noparse()
  0.144778 seconds (198.42 k allocations: 52.445 MiB, 64.56% gc time)

The allocated memory is reduced by a factor of about 2.7 (and the number of allocations by roughly a factor of 85), and the elapsed time is reduced by a factor of about 40. I am using Julia 1.8 and TaylorIntegration v0.9.1.
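
For a less noisy comparison than a single @time call, the same functions can be benchmarked with BenchmarkTools (a sketch assuming the package is installed; @btime runs each call several times and reports the minimum):

using BenchmarkTools

@btime task()          # parsed equations, via @taylorize
@btime task_noparse()  # same integration with parse_eqs=false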
