How to distribute computation over the different CPU cores of my desktop

Hi!
I am doing some work in simulation which requires numerical integration of high-dimensional differential equations; most of it uses functions from the DynamicalSystems.jl package.
However, the computation gets slow on my desktop. Is there a way to parallelize the computation over the different CPU cores of my machine (it's an octa-core processor)? I'm new to this language and have very little knowledge of parallelizing programs.

Thanks!

Are you sure that you have maxed out the single thread performance of your code? Parallelization tends to scale less than linearly, so you might be lucky if you get a 3-4x speedup, while there are lots of performance pitfalls which could lead to an order of magnitude slowdown.


Hi!
Is there a way to know whether I have maxed out the performance limit on a single thread?

Your best bet would be to profile the single-threaded code and see where it's spending most of its time.


Thanks! Could you elaborate a little, please?

Have a look at the docs on profiling; there are a number of different add-on tools for visualizing the profile information, several of which are mentioned in the docs.

https://docs.julialang.org/en/v1/manual/profile/
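
To give a concrete, hedged sketch of what that looks like in practice (the `work` function here is just a stand-in for your own integration code):

```julia
# Minimal profiling sketch using the Profile standard library.
using Profile

function work(n)            # placeholder for the code you actually want to speed up
    s = 0.0
    for i in 1:n
        s += sin(i) / i
    end
    return s
end

work(10)                    # run once first so compilation is not part of the profile
@profile work(10_000_000)   # collect stack samples while the function runs
Profile.print()             # show where the time went, as a sampled call tree
```

Tools such as ProfileView.jl can then display the same data graphically.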

Not in general, but as @dlakelan says, profiling can help you understand where the most potential for optimization might be found. Do you have any particular reason for saying the “computation gets slow”, i.e. benchmarks from similar models in Julia or other languages that run materially faster than what you are seeing, or some theoretical results on the expected runtime?

Also make sure you have read, understood, and applied all of the Performance Tips · The Julia Language

Hi @pafloxy,
this would be the entry point in the Julia documentation to parallel computing: Parallel Computing · The Julia Language
It starts with a brief overview of the (quite) different approaches to parallel computing that Julia supports, namely coroutines, multithreading, and distributed computing (and GPUs).
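
For the multithreading approach, a minimal sketch looks roughly like this (assuming Julia was started with several threads, e.g. `julia --threads 8` or with the JULIA_NUM_THREADS environment variable set; `simulate` is just a placeholder for one independent piece of your computation):

```julia
# Minimal multithreading sketch: independent iterations spread over the available threads.
using Base.Threads

simulate(i) = sum(sin, 1:100_000) + i   # placeholder for one independent computation

results = Vector{Float64}(undef, 50)
@threads for i in 1:50                  # iterations are split across Threads.nthreads() threads
    results[i] = simulate(i)
end
```

The distributed-computing route (the Distributed standard library, `pmap`, etc.) follows the same idea, but with separate worker processes instead of threads.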

If you want to invest a bit in a deeper understanding of the topic, and since your application also involves solving differential equations, I can highly recommend this lecture series by Chris Rackauckas: https://www.youtube.com/playlist?list=PLCAl7tjCwWyGjdzOOnlbGnVNZk0kB8VSa
(You can skip over some parts, e.g. Automatic Differentiation, if that is not of interest to you. Most of it is about parallel computing and solving differential equations with Julia.)

It starts by pointing out that before going parallel, the first thing you want to do is to optimize the serial (non-parallel) version of your code, as the others have pointed out already.


It might also help to try to come up with a minimal working example that shows what you need to compute.

Then people who know about this stuff can help to tell if the performance you are seeing is expected or if some optimizations are possible.


Okay, thanks for the information, I'll surely look into it.

Before going parallel, be sure to follow the Performance Tips · The Julia Language.


Okay! I can share my code if needed. The work mostly consists of integrating high-dimensional differential equations and computing the spectrum of Lyapunov exponents. Although the DynamicalSystems.jl package has built-in methods and functions for calculating such quantities, I have also implemented some of them myself in different ways.
The computational cost arises from the following:

  1. The differential equations are generated randomly, are high-dimensional, and have to be integrated long enough to capture chaotic behaviour.
  2. Many such instances of equations are generated, since we are doing a kind of Monte Carlo sampling over the space of possible equations (as of now, 50 cases per dimension; see the sketch after this list).
  3. We are doing this calculation for systems of many dimensions, like 10, 20, 30, and so on.
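
Since every randomly generated system is independent of the others, that outer Monte Carlo loop is a natural candidate for parallelization. A very rough sketch, assuming Julia is started with multiple threads: `random_system(dim)` is a hypothetical helper standing in for however one random system is constructed, and `lyapunovspectrum` is the DynamicalSystems.jl routine for the spectrum (check the package docs for the exact signature in your installed version):

```julia
# Hedged sketch: run the independent Monte Carlo instances on separate threads.
using DynamicalSystems, Base.Threads

function sample_spectra(dim, ncases; N = 10_000)
    spectra = Vector{Vector{Float64}}(undef, ncases)
    @threads for i in 1:ncases
        ds = random_system(dim)               # hypothetical: build one random high-dimensional system
        spectra[i] = lyapunovspectrum(ds, N)  # spectrum of Lyapunov exponents for this instance
    end
    return spectra
end
```

Whether this gives a good speedup still depends on the serial performance of each instance, which is why the performance-tips advice above comes first.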

Just one more thing: I usually use Jupyter notebooks for my programs. Would switching over to script files and running them in the REPL provide any advantage with regard to performance?

No. What is crucial is to follow those performance tips, particularly the first ones. (In a notebook it is more natural to use global variables, which are performance killers.)
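
A tiny, hedged illustration of that last point (exact timings will vary on your machine):

```julia
# Untyped global variables defeat type inference, so loops over them are slow.
x = rand(10_000)

function sum_global()   # reads the non-constant global `x`
    s = 0.0
    for v in x
        s += v
    end
    return s
end

function sum_arg(a)     # the same loop, but the data arrives as an argument
    s = 0.0
    for v in a
        s += v
    end
    return s
end

# @time sum_global()    # typically allocates and is much slower
# @time sum_arg(x)      # typically far faster and allocation-free
```

The same reasoning applies whether the code lives in a notebook cell or a script; what matters is wrapping the work in functions.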


@pafloxy Linking my own post with additional info about multithreading and parallel computing. Maybe not the whole thread, but you may find some of the documents listed there useful: Distributing loops across threads manually (something like OpenMP) - #4 by j_u
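
For reference, the OpenMP-style pattern discussed there amounts to splitting the index range into chunks yourself and spawning one task per chunk, roughly like this (a sketch only; the work inside the loop is a placeholder):

```julia
# Manual chunking across threads, roughly analogous to OpenMP static scheduling.
using Base.Threads

function process_chunks!(out, n)
    chunks = Iterators.partition(1:n, cld(n, nthreads()))
    @sync for chunk in chunks
        Threads.@spawn for i in chunk   # one task per contiguous chunk of indices
            out[i] = sin(i)             # placeholder work
        end
    end
    return out
end

out = process_chunks!(Vector{Float64}(undef, 1000), 1000)
```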
