Inconsistent typing making adding arrays and adding a vector to the end of an array difficult

I am in awe of how immensely helpful you've been, Rafael, thank you! Your code does cut execution time on my system, though not quite to 0.03 s; still, ~0.18 s is pretty good and significantly faster than my JS code (which takes about 0.77 s).


As RKF45 is meant to solve ODE systems of any size, conds should probably be generalized (i.e. not have its size fixed at 3 elements). I've tried changing it to conds::SVector{Float64}, but that gives an error (namely:

MethodError: no method matching rkf45(::typeof(Lorenz), ::NamedTuple{(:sigma, :beta, :rho),Tuple{Float64,Float64,Float64}}, ::Float64, ::Float64, ::SArray{Tuple{3},Float64,1,3}, ::Float64, ::Float64)

). Any ideas on how to resolve this problem?

StaticArrays is a package that makes the size of the array available to the compiler for optimizations. SVector{3, UInt} means "this vector holds 3 UInts". There's no need to restrict the argument to that, though: it doesn't enable any further optimizations, because performance depends on the types of the objects passed in, not on the type annotations. If you remove the type annotations from the function arguments and simply pass in objects of the type you want (SVector in this case), the function will be compiled and optimized for the types that are actually passed in.

So if you remove the type annotation and simply make conds = @SVector [x0, y0, z0, w0], it should work fine for the 4-dimensional case.
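Something like the following toy sketch illustrates the point (argument names are made up, and a single explicit Euler step stands in for the full RKF45 loop): the function body never mentions the length of conds, yet Julia still compiles a version specialized for the concrete SVector it receives.

using StaticArrays

# Sketch only: no type annotation on `conds`, so the same method
# works for 3-, 4-, or N-dimensional systems.
function euler_step_demo(f, p, t, conds, h)
    return conds .+ h .* f(t, conds, p)
end

lorenz(t, u, p) = @SVector [p.sigma * (u[2] - u[1]),
                            u[1] * (p.rho - u[3]) - u[2],
                            u[1] * u[2] - p.beta * u[3]]

p = (sigma = 10.0, beta = 8/3, rho = 28.0)
conds = @SVector [1.0, 1.0, 1.0]   # a 4-element SVector would work unchanged
euler_step_demo(lorenz, p, 0.0, conds, 0.01)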


Ah, so for SVectors specifying the type in function definitions does not further optimize the performance of the function?

Asserting function argument types doesn't optimize anything, ever. It only restricts which types the function can be called with, and it is used to disambiguate between different methods of the same function with the same arity (i.e. the same number of arguments).
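For example (illustrative only, not the solver from this thread), the annotations decide which method gets called; the compiled code for each call is specialized on the actual argument types either way:

describe(x::Integer) = "an integer"
describe(x::AbstractFloat) = "a floating-point number"
describe(x) = "something else"      # fallback method for every other type

describe(3)      # "an integer"
describe(3.0)    # "a floating-point number"
describe("hi")   # "something else"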

Note that there is an upper limit to the usefulness of SVectors - at some point, the large known size can increase compile times dramatically.


@time includes compilation time on the first run. Try @btime from BenchmarkTools.jl instead; it runs the expression many times and reports the minimum, so compilation doesn't dominate the measurement.
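For instance (a generic illustration, not the solver from this thread):

using BenchmarkTools

x = rand(1000)

@time sum(x)    # first call includes compilation of this method

# @btime runs the call many times and reports the minimum.
# Interpolate globals with $ so the benchmark measures the call itself,
# not the cost of accessing an untyped global variable.
@btime sum($x)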


Thank you Sukera.
With @btime: 13.9 ms (39 allocations: 9.75 MiB)

Interestingly, Wikipedia (Lorenz system - Wikipedia) provides Julia code for this same Lorenz problem using DifferentialEquations.jl.
FYI, that version runs about 12x faster than the faster implementation above, with a @btime of 1.2 ms. If we are comparing apples to apples, then there is a lot of room for improvement.
See the code here for your reference:

using DifferentialEquations, ParameterizedFunctions, Plots
using BenchmarkTools

lorenz = @ode_def begin                  # define the system
    dx = σ * (y - x)
    dy = x * (ρ - z) - y
    dz = x * y - β * z
end σ ρ β

u0 = [1.0,1.0,1.0]                       # initial conditions
tspan = (0.0,60.0)                       # timespan
p = [10.0,28.0,8/3]                      # parameters

prob = ODEProblem(lorenz, u0, tspan, p)  # define the problem
sol0 = solve(prob, reltol=1e-9)          # solve it
# @btime solve(prob, reltol=1e-9)
plot(sol0, vars = (1, 2, 3))