For Monte Carlo simulation with the same code and the same algorithm, how fast is Julia compared with Fortran?

You might try out Julia v1.7.0-beta3 - I’m using this version and am pretty happy with it. The default RNG changed and should improve the performance of your RNG-heavy code quite a bit.

One additional aspect that could improve the performance of your code: I saw that you included some explicit bounds checks. By default, Julia will also check bounds on every indexing operation into arrays, such as musigma[k].mu[1,1] = mu[k,1]. To avoid that, you can use @inbounds (or run Julia with the command line argument --check-bounds=no to disable bounds checks globally). As you know, Fortran will not perform bounds checks by default.
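To illustrate, here is a minimal sketch (the function names are made up for this example) showing where @inbounds goes. Only annotate a loop this way when you are certain the indices are in bounds, e.g. because they come from eachindex:

```julia
# Sums a vector with the default bounds checks on every x[i].
function sumvec(x)
    s = zero(eltype(x))
    for i in eachindex(x)
        s += x[i]          # bounds-checked by default
    end
    return s
end

# Same loop, but @inbounds tells the compiler to skip the checks.
# Safe here because eachindex(x) only yields valid indices of x.
function sumvec_inbounds(x)
    s = zero(eltype(x))
    @inbounds for i in eachindex(x)
        s += x[i]          # no bounds check; enables SIMD etc.
    end
    return s
end
```

Note that @inbounds is a promise to the compiler: if an index really is out of bounds, you get undefined behavior instead of a clean error, so it is best added only after the code is known to be correct.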

The code that you posted is probably too long for most people to want to look at it. If you want to increase the chance of getting help, you could consider posting smaller code snippets that people can review easily on their phone. That’s basically also true for me - it’s too long to motivate me to go through it thoroughly. After some profiling, you will easily find the most expensive parts of your code - extract them and post them again, and I’m sure people will help you optimize them.

As others have written in this thread, I would suggest avoiding the use of global variables. Instead, I would pack them into structs or NamedTuples and pass them around in your code. This makes it explicit which variables you use and gives you library-like behavior instead of a linear program. It will also allow you to swap out types later if that helps (to increase performance or to get additional features for free, such as automatic differentiation, uncertainty propagation, or simply different real types).
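As a minimal sketch of that pattern (the names SimParams and step_size are hypothetical, not from your code): bundle the parameters in a struct with a type parameter, so you can later instantiate it with other real types without touching the algorithm.

```julia
# Bundle what would otherwise be global variables into one struct.
# Base.@kwdef adds a keyword constructor with defaults.
Base.@kwdef struct SimParams{T<:Real}
    nsteps::Int = 100
    dt::T      = 0.01
    sigma::T   = 1.0
end

# Functions take the parameter bundle explicitly instead of
# reaching for globals; this keeps everything type-stable.
step_size(p::SimParams) = p.dt * p.sigma

params = SimParams(dt = 0.02, sigma = 2.0)
step_size(params)
```

Because SimParams is parametric in T, passing in, say, dual numbers from an AD package later would propagate through step_size without any code changes.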

When Trixi.jl was initiated, I wasn’t part of the team (I joined a few months later). At first, it was written mimicking an existing Fortran code (with which we compared the performance in our paper mentioned above). Thus, it was a linear code written to perform a single simulation. To change something, you need to re-compile the Fortran code with different compile time options and/or set some flags in a parameter file which was parsed at first. Then, the code created some global variables, scratchspaces, constants, settings etc.
However, we decided to rewrite Trixi.jl in a completely different style (see also this announcement and our paper linked above). The gist is: Trixi.jl is now designed as a library. This allows us to instantiate multiple different parts of it at the same time, giving us much more flexibility. Additionally, we got rid of parameter files - a simulation setup is defined in pure Julia code. Everything that needs to be set up (such as different internal caches) is created when starting a simulation and passed around. We don’t use any global constants anymore (except in some legacy experimental parts that will be re-designed in the future). Every parameter gets bundled in some struct that we pass around. For example, the physical equations, the geometric mesh, and the numerical solver are Julia structs bundling information; they are passed around in our code. Moreover, we create a cache as a NamedTuple and pass it to basically every function that needs to access it. This is very convenient for us right now. Additionally, it allowed us to enable automatic differentiation through semidiscretizations or even complete simulations with Trixi.jl (and OrdinaryDiffEq.jl in time).

These macros are quite helpful. One of the reasons why I like Julia as much as I do is exactly that it’s possible to have these sophisticated code manipulation tools. Let me briefly comment on the examples that you mentioned. Feel free to ask if you want to know more about any of them specifically.

  • @unpack: This is basically a convenience macro from UnPack.jl that makes it easier to unpack things from structs or NamedTuples. When passing those parameters around, you can access things using @unpack a, b, c = foo instead of a = foo.a; b = foo.b; c = foo.c. However, it doesn’t make a performance difference. In Julia v1.7, the same functionality will be available as (; a, b, c) = foo.
  • @inbounds: I mentioned that briefly above. By default, Julia will perform a bounds check before every indexing operation (unlike Fortran). If you want to avoid this and know that you will not have out-of-bounds access, use @inbounds (or disable bounds checking globally). Depending on your code, this can improve the performance significantly, in particular if it enables further optimizations such as SIMD.
  • @inline: The Julia compiler uses some heuristics to decide whether a “small” function should be inlined or not. You can nudge the compiler to do so using @inline. Depending on the use case, this can improve performance noticeably, e.g. for small functions called in hot loops.
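To make the @unpack point concrete, here is a small self-contained sketch (foo and its fields are made up). All three forms below are equivalent in effect, and none of them changes performance:

```julia
# A parameter bundle as a NamedTuple, as discussed above.
foo = (a = 1, b = 2.5, c = "hello")

# Manual unpacking:
a = foo.a; b = foo.b; c = foo.c

# With UnPack.jl (after `using UnPack`):
#   @unpack a, b, c = foo

# Built-in equivalent syntax in Julia v1.7+:
(; a, b, c) = foo
```

The same destructuring works for structs, since it is based on getproperty.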