Difficulties converting MATLAB (ODE45) code to Julia

I’m sorry to interrupt your morning/afternoon/evening, but I need some help converting code written in MATLAB into Julia. I cannot figure out where/why/how it is going wrong :sob:

I have a code for my research with some coupled differential equations that, in MATLAB + ODE45, works as expected. My goal is to speed up the code using Julia + DifferentialEquations.jl.
After much debugging, I can guarantee that the first initialization produces equal final results in both. After that, I need help.

Maybe it is my coding style that I have to change? Maybe I have to use a stiff solver option of DifferentialEquations.jl? Maybe my equations are simply unstable? Maybe it is my desktop?

Because the code is too extensive to show in this post, you can download it from here.

Note: during debugging (I cannot explain why), in the Julia version the matrix ZMM at line 100 sometimes was not created. I could not display it in debug mode, nor could I see it in the workspace variables panel. I believe this is a bug of the debugger, but maybe this information is relevant :slightly_frowning_face:

And lastly:


julia> versioninfo()
Julia Version 1.3.0
Commit 46ce4d7933 (2019-11-26 06:09 UTC)
Platform Info:
  OS: Linux (x86_64-pc-linux-gnu)
  CPU: Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-6.0.1 (ORCJIT, haswell)
Environment:
  JULIA_EDITOR = atom  -a
  JULIA_NUM_THREADS = 4

Thank you for any suggestions.

If it’s stable with ode45, then it should be stable with Tsit5, DP5, etc. While your style of coding doesn’t have to change, I can’t say it’s the easiest style to debug since the indexing is a little… intense.
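
For reference, switching solvers is a one-line change. A minimal sketch with a toy right-hand side (the problem below is just a placeholder, not your system):

using DifferentialEquations

f(u, p, t) = -0.5 .* u                      # toy placeholder RHS, not the actual coupled equations
u0 = ComplexF64[1.0, 0.5, 0.25]
prob = ODEProblem(f, u0, (0.0, 10.0))

sol_tsit5 = solve(prob, Tsit5(), abstol = 1e-8, reltol = 1e-8)  # good default non-stiff choice
sol_dp5   = solve(prob, DP5(),   abstol = 1e-8, reltol = 1e-8)  # Dormand-Prince, the method behind ode45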

Your initial condition is very constant, so that’s probably not a good place to check the correctness of the derivative evaluation, since applying some of those operators might be giving zeros due to the constant-ness. I would generate a random vector, use the same random initial condition in Julia and MATLAB, and then invoke the f call and see if the derivatives are the same. My guess is that they aren’t.
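
Something along these lines, where the state length and the placeholder RHS below stand in for whatever your code actually uses:

using Random
Random.seed!(1234)                          # fix the seed so the exact same u0 can be written out for MATLAB
N  = 4                                      # placeholder state length; use your real one
u0 = rand(ComplexF64, N)                    # random initial condition, nothing constant about it
du = similar(u0)

f!(du, u, p, t) = (du .= -im .* u)          # placeholder; substitute your actual derivative function

f!(du, u0, nothing, 0.0)                    # one derivative evaluation, outside any solver
# Save u0 and du, evaluate the MATLAB RHS at the same u0, and compare element by element.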

(Note: this coding style does allocate heavily, though, so the speed will not be top-notch because of all those extra array creations. You could definitely implement this derivative function in a different way that would speed it up quite a bit.)
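
For example, one generic pattern (all names here are made up, just to illustrate the idea) is to preallocate the work arrays once, pass them along as the parameter object, and do the linear algebra in place:

using LinearAlgebra

struct Workspace{T}                          # made-up cache holding preallocated buffers
    A::Matrix{T}                             # some fixed coupling matrix
    tmp::Vector{T}                           # scratch vector, overwritten on every call
end

function rhs!(du, u, w::Workspace, t)
    mul!(w.tmp, w.A, u)                      # in-place matrix-vector product, no new allocation
    @. du = -im * w.tmp                      # in-place broadcast into du
    return nothing
end

w  = Workspace(rand(ComplexF64, 4, 4), zeros(ComplexF64, 4))
du = zeros(ComplexF64, 4)
rhs!(du, rand(ComplexF64, 4), w, 0.0)        # allocation-free after the first (compiling) call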


I’m back, and I found the error. That statement does not mean I understand its reasoning, so I’m here asking for help again.
I decided to implement everything again from scratch, and I figured out that this equation is the one having problems:

[Screenshot from 2020-03-03 11-58-57: the equation in question, the sum over k ≠ j, m shown as code below plus its complex conjugate]

Using the MATLAB implementation, its results seem right, but using my Julia version with a sum, it is sometimes right and sometimes wrong. This is strange, so I need to be more precise.

First, the working source code is here.

Second, because this was a naive attempt to translate every equation into Julia, I did no optimization.
Here is the original line of code for the above equation (without the complex-conjugate part, so it fits on screen), which you can also see as a comment at line 104 of my code:
sum((k≠j)&&(k≠m) ? G[j,k]*σ⁺σᶻσ[j,m,k] + G[m,k]*σᶻσ⁺σ[j,m,k] : 0 for k=1:N)


And now, why my results are strange.
For testing purposes I created a simple Runge-Kutta code, and depending on the number of steps the first matrix multiplication works, while the second one works only 20% of the time.

How do I verify that? By comparing with the exact expression from MATLAB.

The first expression, as the MATLAB code computes it (written here in Julia syntax), would be:
sum(G[:,k].*permutedims(σ⁺σσᶻ, [1 3 2])[:,:,k] for k=1:N, dims=3)[j,m]
and my naive code is:
sum((k≠j)&&(k≠m) ? G[j,k]*σ⁺σᶻσ[j,m,k] : 0 for k=1:N)

The second expression, as the MATLAB code computes it (written here in Julia syntax), would be:
sum(transpose(G[:,k]).*permutedims(σ⁺σσᶻ, [3 1 2])[:,:,k] for k=1:N, dims=3)[j,m]
and my naive code is:
sum((k≠j)&&(k≠m) ? conj(G[j,k]*σ⁺σᶻσ[j,m,k]) : 0 for k=1:N)

At lines 108 and 109 of my new code, I compare both expressions and save true or false, to take statistics later on how many of them are right.
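
For what it is worth, the bookkeeping itself is nothing more than this (filled with dummy values here so it runs on its own; the real comparisons happen inside my RK loop):

using Statistics

matches1 = Bool[]                            # one entry per RK step for the first expression
matches2 = Bool[]                            # one entry per RK step for the second expression

for step in 1:100
    push!(matches1, rand() < 0.8)            # stand-in for: matlab_style_1 == naive_1
    push!(matches2, rand() < 0.2)            # stand-in for: matlab_style_2 == naive_2
end

println("first expression agrees in  ", 100 * mean(matches1), "% of the steps")
println("second expression agrees in ", 100 * mean(matches2), "% of the steps")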


To reproduce this strange behavior, run lines 215, 216, and 217.
For 10 and 100 steps of RK, my code never equals the MATLAB code.
[Screenshot from 2020-03-03 12-20-41: comparison output for 10 and 100 RK steps]
For 1000 steps of RK, my first code IS equal to the MATLAB code, whereas the second code equals the MATLAB code only 20% of the time.
[Screenshot from 2020-03-03 12-20-29: comparison output for 1000 RK steps]

I would appreciate an explanation.
Is it my sum that is wrong, and only by luck does it get right with enough points?
Is it something in Julia's internals that is wrong?

I haven’t looked that closely, but the difference is probably that Julia uses ' to mean the adjoint, and I bet you meant a transpose. For complex arrays the difference is that Julia also conjugates.
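
To make the difference concrete (this is standard Julia behaviour, not anything specific to your code):

A = [1+2im  3-1im;
     0+1im  2+0im]

A'                               # adjoint: transpose AND complex conjugate
transpose(A)                     # transpose only, entries keep their imaginary sign
A' == conj(transpose(A))         # true:  ' is the conjugate transpose
A' == transpose(A)               # false: the off-diagonal entries got conjugated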


' also means adjoint in MATLAB


What do you mean by “works” and “wrong”? Is it like 1e-14 off? Minor changes in the order of the matmul due to multithreading could cause that much of a difference (floating-point + is not associative, for example), so you’ll likely never get exactly the same result in any complex code. If your model is chaotic, then even these small differences will quickly give a completely different trajectory.
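
A quick illustration of both points (plain floating-point behaviour, nothing specific to your model):

(0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)   # false: the summation order changes the last bits
(0.1 + 0.2) + 0.3 ≈  0.1 + (0.2 + 0.3)   # true:  isapprox tolerates that difference

# So when checking Julia against MATLAB, compare with a tolerance,
# e.g. isapprox(a, b; rtol = 1e-8), rather than with exact ==.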


It is not 1e-14 off, it was around 1e-3.

Relax about the transpose; it was intentional, to rotate the matrix and not take the complex conjugate of it.