Subtracting two Dates.now() calls will give you elapsed as a Millisecond period, from which you can extract the value as an Int using elapsed.value.
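A minimal sketch of this approach (the sum call is just a stand-in for the work being timed):

using Dates

start_time = Dates.now()
sum(rand(10^6))                     # stand-in workload
elapsed = Dates.now() - start_time  # a Millisecond period
println(elapsed.value)              # the underlying Int value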
However, you should also look into the @time macro, as well as @btime and @benchmark from BenchmarkTools.jl, for other options.
If you want to benchmark Julia code, you should try BenchmarkTools.jl.
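A minimal sketch (the sum call is just a stand-in workload; the $ interpolates the variable so the global lookup is not part of the measurement):

using BenchmarkTools

x = rand(1000)

@btime sum($x)      # runs the expression many times, reports the minimum
@benchmark sum($x)  # returns the full distribution of samples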
This is much better than a simple elapsed = end_time - start_time, which will measure compilation time and is statistically not very robust.
If you’re wondering why your Julia code is slow, consider: https://docs.julialang.org/en/v1/manual/performance-tips/index.html
Can you explain further what you mean by “statistically not very robust”?
It means that a sample size of 1 (which is what you’re going to get) is neither meaningful nor conclusive, as you’re introducing unknown system state (among other variables) into your experiment. BenchmarkTools gets around this by running the code multiple times and providing statistics across the entire run.
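For instance, a sketch of inspecting those statistics on the trial object returned by @benchmark (the sort call is a stand-in workload, and note the rand call is included in each sample):

using BenchmarkTools, Statistics

t = @benchmark sort(rand(1000))

minimum(t)  # best-case sample, least affected by system noise
median(t)   # robust summary across all samples
mean(t)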
What will happen if I measure compilation time?
You will get a tremendously pessimistic result, as the second run of a function is significantly faster:
julia> using LightGraphs
julia> g = Graph(100,200)
{100, 200} undirected simple Int64 graph
julia> @time betweenness_centrality(g); # timing compilation since this is the first run
0.292876 seconds (602.88 k allocations: 32.449 MiB)
julia> @time betweenness_centrality(g); # already compiled
0.002362 seconds (15.46 k allocations: 2.427 MiB)
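With BenchmarkTools the warm-up and the repeated runs are handled for you; a sketch on the same graph (output omitted):

julia> using BenchmarkTools

julia> @btime betweenness_centrality($g);  # compiles first, then times many runs and reports the minimum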
You are right. I tested “Dates” with my code in Julia, and “datetime” in Python. The first time, Julia is slow, but even with the compilation time it is faster than Python.
I did this test only 4 times; I would like to do it at least 1000 times. Do you know of a tool similar to “BenchmarkTools”, but in Python?