Hi!
I am using DifferentialEquations.jl for simulating trajectories over high-dimensional vector fields.
In my case, the time-evolution function for each coordinate is generated by a pre-defined random function (the same function for every coordinate), which takes the current state of the system as input.
Here `nonlinear_2(x, index)` generates the vector field for each index.
And what is the question or the problem?
It would also help if you enter the code not as a screenshot but rather as a copy-pastable text. Use triple backticks as shown in item 2 at Please read: make it easier to help you.
Hi!
You can see that I've used a `for` loop in the `deff` function over the range of dimensions `dim`. I wanted to ask if there is a better way to do this in terms of computational efficiency?
I’ll reupload the code once I make some corrections!
It is a bit strange that `G1` and `G2` are randomly generated in each function call. Are you sure that's what you want?
In my experience, it is sometimes even better to use for loops. That often avoids accidentally allocating new vectors.
Assuming that G1 and G2 are constants, you could do
```julia
dim = 10
p = (G1 = randn(dim, dim), G2 = randn(dim, dim, dim), dim = dim)

function deff(dx, x, p, t)
    for j = 1:p.dim
        dx[j] = p.G1[j,:]' * x + x' * p.G2[j,:,:] * x + x[j]^3
    end
end

x0 = randn(dim)
tspan = (0., 10.)
ODEProblem(deff, x0, tspan, p)
```
If you want to be fancy with passing parameters back and forth, you can use UnPack.jl
Yep, sorry! I noticed that error (`G1` & `G2` being regenerated every time) after uploading the question. Also, thanks for your opinion in support of using `for` loops.
However, I would like to ask if a similar "summation over some indices" can be done for higher-rank tensors?
```julia
dx[j] = p.G1[j,:]' * x + x' * p.G2[j,:,:] * x + x[j]^3
```
Here `x' * p.G2[j,:,:] * x` works because it is a matrix-vector multiplication, but is there an analogue for higher-rank tensors? Also, could you suggest any methods to reduce computational time, as I'll be increasing `dim` up to 60 for my project.
You could use https://github.com/Jutho/TensorOperations.jl to avoid writing out the summations by hand.
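For example (a sketch, assuming a hypothetical rank-4 coefficient tensor `G3`), the `@tensor` macro lets you write such a contraction in explicit index notation:

```julia
using TensorOperations  # https://github.com/Jutho/TensorOperations.jl

dim = 10
x = randn(dim)
G3 = randn(dim, dim, dim, dim)  # hypothetical rank-4 coefficient tensor

# Sum over the repeated indices a, b, c — the analogue of
# x' * G2[j,:,:] * x, one rank higher:
@tensor y[j] := G3[j, a, b, c] * x[a] * x[b] * x[c]
```

The `:=` creates the output array `y` for you; repeated indices on the right-hand side are summed, Einstein-style.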
For performance, the most common advice for ODE problems is to avoid allocations and to avoid accessing global variables (as you probably already know, since you pass everything through `p`).
Also, check out Performance Tips · The Julia Language and Frequently Asked Questions · DifferentialEquations.jl.
Use BenchmarkTools.jl for performance measurements.
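For instance, a minimal sketch of timing the `deff` call from above with BenchmarkTools (the `$` interpolation keeps global-variable access out of the measurement):

```julia
using BenchmarkTools

dim = 10
p = (G1 = randn(dim, dim), G2 = randn(dim, dim, dim), dim = dim)
x = randn(dim)
dx = similar(x)

function deff(dx, x, p, t)
    for j = 1:p.dim
        dx[j] = p.G1[j,:]' * x + x' * p.G2[j,:,:] * x + x[j]^3
    end
end

# @btime reports the minimum time and the allocation count,
# which is exactly what you want to drive toward zero here.
@btime deff($dx, $x, $p, 0.0)
```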
Further vectorisation could give a small speed increase, but plain `for` loops are often just as fast.
You can add `@inbounds` to the loop (`@inbounds for j = 1:p.dim`) to avoid bounds checking. There is also `@simd` to encourage the compiler to emit better CPU instructions.
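As a sketch, the loop from the code above could be annotated like this (`@views` additionally avoids allocating copies of the slices; `@simd` is harder to apply here because the loop body contains matrix products rather than simple scalar operations):

```julia
dim = 10
p = (G1 = randn(dim, dim), G2 = randn(dim, dim, dim), dim = dim)

# Same loop as before; @inbounds skips bounds checks (safe here, since
# j never leaves 1:dim) and @views turns the slices into non-allocating views.
function deff_fast(dx, x, p, t)
    @inbounds for j = 1:p.dim
        dx[j] = @views p.G1[j, :]' * x + x' * p.G2[j, :, :] * x + x[j]^3
    end
end
```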
I would guess that for your problem a simple implementation like the one above is sufficient. You could instead experiment with finding the best ODE solver, which could have quite an impact.
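For example, you could compare a couple of non-stiff solvers on the same problem (a sketch; the seed, tolerances, and the short `tspan` are arbitrary choices — note the cubic term can make trajectories blow up in finite time, hence the small initial condition):

```julia
using DifferentialEquations, Random

Random.seed!(1)  # arbitrary seed, just for reproducibility
dim = 10
p = (G1 = randn(dim, dim), G2 = randn(dim, dim, dim), dim = dim)

function deff(dx, x, p, t)
    for j = 1:p.dim
        dx[j] = p.G1[j,:]' * x + x' * p.G2[j,:,:] * x + x[j]^3
    end
end

prob = ODEProblem(deff, 0.1 .* randn(dim), (0.0, 1.0), p)

sol1 = solve(prob, Tsit5())                                # good non-stiff default
sol2 = solve(prob, Vern7(), abstol = 1e-8, reltol = 1e-8)  # higher order, tighter tolerances
```

Benchmarking both `solve` calls with BenchmarkTools would then tell you which pays off at your accuracy requirements.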
Okay! Thanks very much.